diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md
deleted file mode 100644
index aa6bd730dfb38121c5b2f92090be274908134c01..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Fix Microsoft Office 2021 Error Code 0-2054 (0)
-
Microsoft Office 2021 is the latest version of the popular productivity suite that offers many new features and improvements. However, some users may encounter an error code 0-2054 (0) when trying to install or update Office 2021 on their devices. This error can prevent the installation or update process from completing successfully and cause frustration for the users.
-
Fortunately, there are some possible solutions that can help you fix this error and enjoy Office 2021 without any issues. Here are some of them:
Uninstall any previous versions of Office. Sometimes, the error code 0-2054 (0) can occur if you have an older version of Office installed on your device, such as Office 365 or Office 2019. To avoid any conflicts, you should uninstall any previous versions of Office using the Office uninstall tool or the Control Panel. Make sure to restart your device after uninstalling Office.
-
Disable any firewall, proxy, or antivirus software. Another possible cause of the error code 0-2054 (0) is that some firewall, proxy, or antivirus software may block the installation or update of Office 2021 as a security measure. To avoid this, you should temporarily disable any firewall, proxy, or antivirus software that you have on your device and try to install or update Office 2021 again. Remember to enable them back after you finish the installation or update.
-
Use the Office Deployment Tool. The Office Deployment Tool (ODT) is a tool that allows you to download and install Office 2021 offline using a configuration file. This can help you avoid any network-related issues that may cause the error code 0-2054 (0). To use the ODT, you need to follow these steps:
-
-
Download the Office Deployment Tool and run it to extract the setup.exe file and the configuration.xml file.
-
Edit the configuration.xml file using a text editor such as Notepad and specify the parameters for your Office 2021 installation or update. You can use the Office Customization Tool to generate a configuration file based on your preferences; a minimal sample file is shown after these steps.
-
Save and close the configuration.xml file and place it in the same folder as the setup.exe file.
-
Open a Command Prompt window as an administrator and navigate to the folder where the setup.exe and configuration.xml files are located.
-
Type setup.exe /download configuration.xml and press Enter to download the Office 2021 installation files.
-
Type setup.exe /configure configuration.xml and press Enter to install or update Office 2021 using the configuration file.
-
-
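For reference, here is a minimal sketch of what a configuration.xml could look like for an offline, 64-bit, English installation of Office LTSC Professional Plus 2021. The product ID, channel, and source path shown here are assumptions for illustration; check Microsoft's Office Deployment Tool documentation for the values that match your edition and license.

```xml
<!-- Minimal sample configuration.xml (product ID, channel, and SourcePath are assumptions; adjust to your edition and license) -->
<Configuration>
  <!-- SourcePath is the local folder where setup.exe /download stores the offline installation files -->
  <Add OfficeClientEdition="64" Channel="PerpetualVL2021" SourcePath="C:\OfficeOffline">
    <Product ID="ProPlus2021Volume">
      <Language ID="en-us" />
    </Product>
  </Add>
  <!-- Show the full installation UI and accept the license terms -->
  <Display Level="Full" AcceptEULA="TRUE" />
</Configuration>
```

Save this file next to setup.exe, then run the /download and /configure commands described in the steps above.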
Contact Microsoft support. If none of the above solutions work for you, you may need to contact Microsoft support for further assistance. You can visit the Microsoft support website and choose the option that best suits your situation. You can also post your question on the Microsoft Community forum and get help from other users who may have faced similar issues.
-
-
We hope that this article has helped you fix the error code 0-2054 (0) for Office 2021 and enjoy its features without any problems. If you have any feedback or suggestions, please let us know in the comments below.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md
deleted file mode 100644
index aa671fc0ff2ede24531ac4c28efcb1e424d1fac4..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download FIFA 22 Cracked Version in Portuguese
-
If you are a fan of soccer games, you might be interested in downloading FIFA 22, the latest installment of the popular franchise from EA Sports. However, if you don't want to pay for the game or you want to play it in Portuguese, you might be looking for a cracked version that bypasses the DRM protection and allows you to change the language settings.
In this article, we will show you how to download FIFA 22 cracked version in Portuguese using a reliable torrent site and a simple patch. Follow these steps and enjoy the game for free!
-
-
Go to Skidrow Reloaded, one of the best torrent sites for cracked games. Search for FIFA 22 and download the torrent file.
-
Open the torrent file with your preferred torrent client, such as uTorrent or BitTorrent. Choose a folder to save the game files and start the download.
-
Once the download is complete, extract the game files using a program like WinRAR or 7-Zip. You will find a folder called FIFA.22-CPY, which contains the cracked version of the game.
-
Run the setup.exe file and follow the installation instructions. Make sure to uncheck any additional software or toolbars that might be offered during the installation.
-
After the installation is done, copy the contents of the CPY folder (which contains the crack) and paste them into the game folder, replacing the original files.
-
To change the language to Portuguese, open the CPY.ini file with a text editor like Notepad++. Find the line that says Language=english and change it to Language=brazilian. Save and close the file.
-
Now you can launch the game from the desktop shortcut or the FIFA22.exe file. Enjoy playing FIFA 22 cracked version in Portuguese!
-
-
Note: This method is for educational purposes only. We do not condone piracy or illegal downloading of games. If you like FIFA 22, please support the developers and buy the game from the official website.
If you are wondering what FIFA 22 has to offer in terms of new features and modes, here are some highlights that you can expect from the game:
-
-
-
HyperMotion Technology: This is a new gameplay technology that uses advanced machine learning and real-life motion capture data from 22 professional players to create more realistic animations, movements, and interactions on the pitch. HyperMotion Technology is only available on PlayStation 5, Xbox Series X|S, and Stadia.
-
Goalkeeper Rewrite: The goalkeepers have been completely revamped with a new system that improves their shot-stopping abilities, decision-making skills, and personality. You will notice more variety and consistency in how they react to different situations and scenarios.
-
New Attacking Tactics: You can now customize your team's offensive style with more options and control over how they build up play, create chances, and finish. You can also adjust your defensive shape and intensity to counter your opponent's tactics.
-
Career Mode: You can create your own club from scratch and lead them to glory in Career Mode, choosing everything from the name, logo, kit, stadium, and fanbase. You can also enjoy a more immersive Player Career experience that lets you interact with your manager, teammates, and media, as well as participate in training sessions and matches.
-
VOLTA FOOTBALL: VOLTA FOOTBALL returns with more flair and style on the street football playgrounds around the world. You can customize your avatar with new outfits, hairstyles, tattoos, and emotes, as well as unlock new items and rewards as you progress. You can also play with your friends online or offline in various modes and formats.
-
FIFA 22 Ultimate Team: FUT 22 introduces FUT Heroes, which are iconic players from the past who have made a lasting impact on their clubs or leagues. You can also enjoy a redesigned Division Rivals and FUT Champions system that makes it easier to compete and progress against other players. Additionally, you can personalize your club with more customization options for your badge, stadium, kits, and celebrations.
-
Pro Clubs: Pro Clubs lets you create and join a team of up to 11 players online and play matches against other clubs. You can customize your Virtual Pro's appearance, attributes, traits, and positions, as well as track your progress and achievements with a new player growth system. You can also find new teammates and opponents with a streamlined social play feature.
-
-
These are just some of the new features and modes that FIFA 22 has to offer. If you want to learn more about the game, you can visit the official website or watch the official trailer.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md
deleted file mode 100644
index e935cc048b5f2b86bb98f6a702322405d6213eee..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md
+++ /dev/null
@@ -1,155 +0,0 @@
-
-
Gujarati Kaps Fonts: A Guide to Download and Use 150+ Stylish Fonts for Photoshop
-
If you are looking for some unique and elegant fonts for your Gujarati designs, you might want to check out the Gujarati Kaps fonts. These are a collection of 150+ stylish fonts that are specially designed for Photoshop and other graphic design software. In this article, we will show you what are Gujarati Kaps fonts, how to download them, and how to use them in Photoshop. Let's get started!
-
Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar
Gujarati Kaps fonts are a family of Gujarati fonts with a distinctive style and flair. They were created by Kapilbhai Dave, a professional graphic designer and font creator from Gujarat. He has been making fonts since 1998 and has developed over 5000 fonts in various languages.
-
The origin and features of Kaps fonts
-
Kapilbhai Dave started making fonts as a hobby when he was studying at the National Institute of Design in Ahmedabad. He was inspired by the calligraphy and typography of different cultures and regions. He wanted to create fonts that would reflect the beauty and diversity of Gujarati language and culture.
-
He named his fonts as Kaps, which is derived from his own name. He also added numbers to his fonts, such as Kap 1, Kap 2, Kap 3, etc., to indicate the order of creation. He used various tools and techniques to make his fonts, such as pen, brush, ink, paper, scanner, computer, software, etc.
-
Kapilbhai Dave's fonts have some common features that make them stand out from other Gujarati fonts. Some of these features are:
-
-
They have a smooth and flowing curve that gives them a natural and organic look.
-
They have a balanced and harmonious proportion that makes them easy to read and pleasing to the eye.
-
They have a creative and artistic flair that adds character and personality to the text.
-
They have a variety of styles and weights that suit different purposes and moods.
-
They have a high-quality and professional finish that makes them suitable for print and digital media.
-
-
The benefits and applications of Kaps fonts
-
Kapilbhai Dave's fonts have many benefits and applications for designers and users alike. Some of these benefits are:
-
-
They enhance the aesthetic appeal and visual impact of the design.
-
They convey the message and tone of the content more effectively.
-
They attract the attention and interest of the audience more easily.
-
They express the identity and culture of the brand or organization more authentically.
-
They add value and uniqueness to the product or service more convincingly.
-
-
Kapilbhai Dave's fonts can be used for various purposes and projects, such as:
-
-
Wedding invitations, brochures, pamphlets, flyers, posters, banners, etc.
-
Logos, slogans, headlines, titles, captions, etc.
-
Books, magazines, newspapers, newsletters, etc.
-
Websites, blogs, social media posts, etc.
-
Videos, animations, presentations, etc.
-
-
How to download Gujarati Kaps Fonts?
-
If you want to use Kapilbhai Dave's fonts in your designs, you need to download them first. There are many websites that offer his fonts for free or for a fee. However, one of the easiest ways to download his fonts is from 4shared.com. This is a file-sharing website that allows you to download files from other users. Here are the steps to download Gujarati Kaps Fonts from 4shared.com:
Click on the link that says "Download gujarati kaps fonts (150 varity of gujarati fonts).rar from 4shared.com". This will take you to another web page that has the file name "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar".
-
Click on the green button that says "Download". This will start downloading the file to your computer. The file size is about 5 MB.
-
Wait for the download to finish. You can check the progress of the download on your browser or on your download manager.
-
-
The steps to unzip and install the fonts on Windows
-
-
Locate the file "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar" on your computer. It should be in your Downloads folder or wherever you saved it.
-
Right-click on the file and select "Extract Here" or "Extract All". If Windows cannot open the .rar archive natively, use a program like WinRAR or 7-Zip to extract it. This will extract the file into a folder with the same name.
-
Open the folder "Gujarati KAPS Fonts (150 varity of gujarati fonts)". You will see many subfolders with names like "KAP-01", "KAP-02", "KAP-03", etc. Each subfolder contains one or more font files with extensions like ".ttf", ".otf", ".fon", etc.
-
Select all the font files that you want to install. You can use Ctrl+A to select all or Ctrl+click to select multiple files.
-
Right-click on the selected files and select "Install". This will install the fonts on your computer. You may need administrator permission or password to do this.
-
Wait for the installation to finish. You can check if the installation was successful by going to Control Panel > Fonts or by opening any software that uses fonts like Word or Photoshop.
-
-
How to use Gujarati Kaps Fonts in Photoshop?
-
Now that you have downloaded and installed Gujarati Kaps Fonts on your computer, you can use them in Photoshop or any other graphic design software. Here are some steps to use Gujarati Kaps Fonts in Photoshop:
-
The steps to select and apply the fonts in Photoshop
-
-
Open Photoshop and create a new document or open an existing one.
-
Select the Text tool (T) from the toolbar or press T on your keyboard.
-
Click on the document where you want to add text or select an existing text layer.
-
In the Options bar at the top of your screen, click on the Font drop-down menu. This will show you all the available fonts on your computer.
-
Scroll down until you find the font name that starts with "KAP". You will see many options like "KAP-01", "KAP-02", "KAP-03", etc. These are all different styles of Gujarati Kaps Fonts. You can also type "KAP" in the search box to filter out other fonts.
-
Select the font style that you like and click on it. You will see a preview of the font on your text.
-
Adjust the font size, color, alignment, and other settings as you wish. You can also use the Character panel (Window > Character) or the Paragraph panel (Window > Paragraph) for more options.
-
Repeat the steps for any other text layers that you want to apply Gujarati Kaps Fonts to.
-
-
The tips and tricks to create stunning designs with Kaps fonts
-
Gujarati Kaps Fonts are versatile and expressive fonts that can help you create stunning designs with Photoshop. Here are some tips and tricks to make the most of them:
-
-
-
Use contrast and hierarchy to create visual interest and clarity. You can mix different styles and weights of Kaps fonts to create contrast and hierarchy. For example, you can use a bold or heavy style for headlines and a light or regular style for body text. You can also use different colors or sizes to emphasize important words or phrases.
-
Use kerning and tracking to adjust the spacing between letters and words. Kerning is the adjustment of the space between individual letters, while tracking is the adjustment of the space between groups of letters or words. You can use these tools to fine-tune the appearance and readability of your text. To access these tools, select your text layer and go to Window > Character. Then use the sliders or input boxes for kerning and tracking.
-
Use leading to adjust the spacing between lines of text. Leading is the vertical space between lines of text. You can use this tool to control the density and flow of your text. To access this tool, select your text layer and go to Window > Character. Then use the slider or input box for leading.
-
Use alignment and justification to arrange your text in different ways. Alignment is the horizontal position of your text relative to its margins or edges. Justification is the adjustment of the space between words to make them fit evenly across a line. You can use these tools to create different effects and layouts for your text. To access these tools, select your text layer and go to Window > Paragraph. Then use the buttons for alignment and justification.
-
Use ligatures and alternates to add some flair and variety to your text. Ligatures are special characters that combine two or more letters into one glyph, such as "fi" or "fl". Alternates are different versions of a letter that have a different shape or style, such as "a" or "g". You can use these tools to make your text more unique and dynamic. To access these tools, select your text layer and go to Window > Glyphs. Then browse through the glyphs panel and double-click on any ligature or alternate that you want to use.
-
-
Conclusion
-
Gujarati Kaps Fonts are a great choice for anyone who wants to create beautiful and professional designs with Gujarati text. They are easy to download, install, and use in Photoshop or any other graphic design software. They have a wide range of styles and weights that can suit any purpose and mood. They have a smooth and flowing curve that gives them a natural and organic look. They have a balanced and harmonious proportion that makes them easy to read and pleasing to the eye. They have a creative and artistic flair that adds character and personality to the text.
-
If you are interested in using Gujarati Kaps Fonts in your designs, you can follow the steps and tips that we have shared in this article. You can also experiment with different combinations and settings to find your own style and voice. We hope that this article has inspired you to try out Gujarati Kaps Fonts and create stunning designs with them.
-
Do you have any questions or comments about Gujarati Kaps Fonts? Do you have any suggestions or feedback for us? Let us know in the comments below!
-
FAQs
-
Q1: How many Kaps fonts are there in total?
-
A1: According to Kapilbhai Dave's website, there are over 5000 fonts in total, including Gujarati, Hindi, English, Sanskrit, Marathi, Bengali, Tamil, Telugu, Malayalam, Kannada, Punjabi, Oriya, Assamese, Nepali, Tibetan, Arabic, Persian, Urdu, Sindhi, Pashto, Balochi, Kurdish, Hebrew, Greek, Russian, Mongolian, Chinese, Japanese, Korean, Thai, Lao, Khmer, Vietnamese, Burmese, Sinhala, and more.
-
Q2: Are Kaps fonts free to use?
-
A2: It depends on where you download them from and what you use them for. Some websites offer Kaps fonts for free for personal or non-commercial use only. Others may charge a fee for commercial use or for the full version of the fonts. You should always check the license terms and conditions before downloading and using any font. You should also respect the intellectual property and rights of the font creator.
-
Q3: Can I use Kaps fonts in other software besides Photoshop?
-
A3: Yes, you can use Kaps fonts in any software that supports TrueType, OpenType, or other font formats. However, some software may have different features and options for using fonts than Photoshop. For example, some software may not support ligatures or alternates, or may have different ways of accessing them. You should always check the documentation and help files of your software to learn how to use fonts effectively.
-
Q4: How can I preview the fonts before downloading them?
-
A4: One way to preview the fonts before downloading them is to use online font preview tools. These are websites that allow you to type in some text and see how it looks with different fonts. Some examples of online font preview tools are:
-
-
Font Squirrel Matcherator: This tool allows you to upload an image of a font and find similar or matching fonts.
-
MyFonts WhatTheFont: This tool allows you to upload an image of a font and identify it.
-
Wordmark.it: This tool allows you to type in some text and see how it looks with all the fonts installed on your computer.
-
DaFont: This website allows you to browse through thousands of free fonts and see how they look with custom text.
-
-
Q5: Where can I find more resources and tutorials on Kaps fonts?
-
A5: If you want to learn more about Kaps fonts and how to use them in your designs, you can check out some of these resources and tutorials:
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md b/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md
deleted file mode 100644
index 7f0a23277f8527971830e98ef9225a98fe4ddb03..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md
deleted file mode 100644
index 7fe661721b8414b77bb46613768a133df022b07a..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
Car Simulator 2 All Cars Unlocked APK: A Realistic and Fun Racing Game
-
If you are a fan of racing games, you might have heard of Car Simulator 2, a popular simulation game that lets you drive various cars in a realistic world. But did you know that you can download Car Simulator 2 all cars unlocked apk and enjoy the game with more features and benefits? In this article, we will tell you what Car Simulator 2 is, why you should download the modded version, and how to do it. Read on to find out more.
-
What is Car Simulator 2?
-
Car Simulator 2 is a simulation game developed by Oppana Games FZC LLC. It is available for Android devices and has over 10 million downloads on Google Play Store. The game has impressive graphics and physics that make you feel like you are driving a real car. You can explore a vast open world with different locations, such as cities, deserts, mountains, and highways. You can also choose from a variety of cars, ranging from sports cars, muscle cars, SUVs, trucks, and more. You can customize your car with different colors, wheels, spoilers, and other accessories.
The game has different modes that you can play solo or with your friends online. You can participate in races, missions, daily challenges, and events to earn money and gold. You can also join clubs and compete with other players on the leaderboard. The game is fun and addictive, as you can experience realistic driving scenarios, such as traffic, police, accidents, weather, and more.
-
Why download Car Simulator 2 all cars unlocked apk?
-
While Car Simulator 2 is a free game, it has some limitations that might affect your enjoyment. For example, you need to spend money and gold to buy new cars or upgrade your existing ones. You also need to unlock new locations by completing certain tasks or reaching certain levels. Moreover, some cars and locations are only available through in-app purchases that require real money.
-
That is why downloading Car Simulator 2 all cars unlocked apk is a good idea. This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.
-
How to download and install Car Simulator 2 all cars unlocked apk?
-
Downloading and installing Car Simulator 2 all cars unlocked apk is easy and simple. Just follow these steps:
-
-
Download the apk file from a trusted source. You can use this link to get the latest version of the modded game.
-
Enable unknown sources in your device settings. This will allow you to install apps from sources other than Google Play Store.
-
Install the apk file by tapping on it and following the instructions.
-
Launch the game and enjoy.
-
-
Conclusion
-
Car Simulator 2 is a realistic and fun racing game that lets you drive various cars in a vast open world. You can play different modes, missions, challenges, and events with your friends online. You can also customize your car with different colors, wheels, spoilers, and other accessories.
-
-
If you want to enjoy the game with more features and benefits, you should download Car Simulator 2 all cars unlocked apk. This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.
-
FAQs
-
Here are some frequently asked questions about Car Simulator 2 all cars unlocked apk:
-
| Question | Answer |
| --- | --- |
| Is Car Simulator 2 all cars unlocked apk safe to download and install? | Yes, it is safe as long as you download it from a trusted source. However, you should always scan the apk file with an antivirus before installing it. |
| Will I get banned for using Car Simulator 2 all cars unlocked apk? | No, you will not get banned for using the modded version of the game. The game does not have any anti-cheat system or online verification. You can play the game offline or online without any problems. |
| Can I update Car Simulator 2 all cars unlocked apk? | No, you cannot update the modded version of the game. If you want to get the latest updates and features of the game, you will have to download and install the original version from Google Play Store. |
| Can I play Car Simulator 2 all cars unlocked apk with my friends online? | Yes, you can play the game with your friends online. You can join clubs, races, missions, and events with other players who have the same version of the game. |
| What are the minimum requirements to play Car Simulator 2 all cars unlocked apk? | The minimum requirements to play the game are Android 4.4 or higher, 1 GB of RAM, and 300 MB of free storage space. |

I hope this article has helped you learn more about Car Simulator 2 all cars unlocked apk. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md b/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md
deleted file mode 100644
index 2c3011aadb6ae446f47639c62cec231f133e032d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Download Go Go by BTS: A Guide for ARMYs
-
Are you a fan of BTS, the global sensation and phenomenon in the music industry? If so, you probably have heard of their hit song "Go Go", a catchy and upbeat track that showcases their charisma and talent. But have you downloaded it yet? If not, you are missing out on a lot of fun and excitement. In this article, we will tell you everything you need to know about "Go Go" by BTS, and why you should download it right now.
"Go Go" is a song by BTS, a seven-member South Korean boy band that has taken over the world with their music, message, and style. The song was released on September 18, 2017, as part of their fifth mini album "Love Yourself: Her". It is the eighth track on the album, and also appears as the fourth track on their second compilation album "Love Yourself: Answer".
-
The song is a fusion of trap, hip hop, and EDM genres, with a catchy chorus and playful lyrics. The song is about living in the moment and enjoying life without worrying too much about the future or money. The song also reflects the youth culture and attitude of BTS and their fans, who are often called ARMYs.
-
Why you should download Go Go by BTS?
-
There are many reasons why you should download "Go Go" by BTS. Here are some of them:
-
How to support BTS by downloading Go Go?
-
One of the best ways to support BTS is by downloading their songs legally and ethically. By doing so, you are showing your appreciation and respect for their hard work and creativity. You are also helping them achieve more recognition and success in the music industry. Downloading their songs also contributes to their chart rankings, awards nominations, and sales records.
-
There are many platforms and methods to download "Go Go" by BTS legally and ethically. Some of them are:
-
-
Buying or streaming the song from official online music stores or services, such as iTunes, Spotify, Amazon Music, YouTube Music, etc.
-
Purchasing or downloading the song from official physical albums or CDs, such as "Love Yourself: Her" or "Love Yourself: Answer".
-
Using official fan club memberships or subscriptions to access exclusive content or benefits related to the song or BTS.
-
-
How to enjoy Go Go by BTS?
-
Another reason why you should download "Go Go" by BTS is because it is a fun and enjoyable song that will make you happy and energetic. There are many ways to listen to and appreciate the song, such as:
-
-
Watching the music video of "Go Go" on YouTube or other platforms. The music video features BTS performing the song in colorful outfits and settings, with hilarious expressions and gestures. The music video also has some references and parodies of popular culture and memes.
-
Learning the choreography of "Go Go" from online tutorials or videos. The choreography of "Go Go" is very catchy and fun, with some moves inspired by the "Gwiyomi" song and the "Dame Tu Cosita" dance. You can learn the dance steps and practice them with your friends or alone.
-
Singing along to "Go Go" with the lyrics or karaoke versions. The lyrics of "Go Go" are very witty and humorous, with some wordplay and slang. You can sing along to the song and express your feelings and thoughts about life and money.
-
-
How to join the ARMY fandom with Go Go?
-
A third reason why you should download "Go Go" by BTS is because it will help you connect with other fans of the song and BTS, who are known as ARMYs. ARMYs are one of the most loyal and passionate fandoms in the world, who support and love BTS unconditionally. There are many communities and activities to join the ARMY fandom with "Go Go", such as:
-
-
Following BTS and their official accounts on social media, such as Twitter, Instagram, Facebook, Weverse, etc. You can interact with BTS and other ARMYs by liking, commenting, sharing, or posting about "Go Go" or other BTS-related topics.
-
Participating in fan projects or events related to "Go Go" or BTS, such as streaming parties, hashtag campaigns, fan art contests, charity donations, etc. You can show your appreciation and support for BTS and their music by joining these projects or events.
-
Attending concerts or fan meetings of BTS where they perform "Go Go" or other songs live. You can experience the amazing performance and energy of BTS and their fans by attending these concerts or fan meetings.
-
-
Where to download Go Go by BTS?
-
Now that you know why you should download "Go Go" by BTS, you might be wondering where to download it from. There are many sources and sites to download the song, but not all of them are reliable or convenient. To help you choose the best option for you, we have prepared a comparison table of the best sources and sites to download "Go Go" by BTS, based on quality, price, and convenience.
-
-
-
| Source/Site | Quality | Price | Convenience |
| --- | --- | --- | --- |
| iTunes | High | $1.29 per song | Easy to use, compatible with Apple devices |
| Spotify | High | $9.99 per month for premium subscription | Easy to use, compatible with various devices, offers offline mode |
| Amazon Music | High | $0.99 per song or $7.99 per month for unlimited subscription | Easy to use, compatible with various devices, offers offline mode |
| YouTube Music | Medium | $11.99 per month for premium subscription | Easy to use, compatible with various devices, offers offline mode and music video access |
| "Love Yourself: Her" album | High | $19.99 per album (includes 9 songs) | Requires physical purchase or delivery, offers additional content such as photobook and photocard |
| "Love Yourself: Answer" album | High | $24.99 per album (includes 26 songs) | Requires physical purchase or delivery, offers additional content such as photobook and photocard |

Conclusion
In conclusion, "Go Go" by BTS is a great song that you should download right now. It is a fun and upbeat song that will make you happy and energetic. It is also a way to support BTS and their music, enjoy their performance and style, and join their fandom and community. You can download the song from various sources and sites, depending on your preference and budget. So what are you waiting for? Go go go and download "Go Go" by BTS today!
Frequently Asked Questions (FAQs)
Here are some of the most common questions that people have about "Go Go" by BTS:
What does "yolo yolo yolo yo" mean in the chorus of "Go Go"?
This is a repetition of the acronym "YOLO", which stands for "You Only Live Once". It is a popular phrase that expresses the idea of living in the present and enjoying life without regrets. In the context of the song, it means that BTS and their fans are having fun and spending money without worrying about the future or saving up.
-
What is the meaning of the money gun gesture in the "Go Go" choreography?
-
This is a gesture that mimics shooting money from a toy gun, which is often used by rappers or celebrities to show off their wealth and status. In the context of the song, it is a sarcastic and ironic gesture that mocks the materialistic and consumerist culture of society. It also shows that BTS and their fans are not obsessed with money or fame, but rather value happiness and freedom.
-
What are some of the references and parodies in the "Go Go" music video?
-
There are many references and parodies in the "Go Go" music video, such as:
-
-
The opening scene where BTS are lying on a pile of money and wearing masks is a reference to the movie "The Purge", which is a dystopian thriller about a night where all crimes are legal.
-
The scene where BTS are dancing on a yacht and wearing Hawaiian shirts is a parody of the song "Gangnam Style" by Psy, which is a viral hit that mocks the lavish lifestyle of Seoul's elite.
-
The scene where BTS are playing video games and eating snacks is a reference to the popular online game "PlayerUnknown's Battlegrounds", which is a survival shooter game where players compete against each other.
-
The scene where BTS are wearing animal onesies and dancing with inflatable toys is a parody of the song "Dame Tu Cosita" by El Chombo, which is a viral hit that features an alien dancing to a catchy tune.
-
-
What are some of the wordplay and slang in the "Go Go" lyrics?
-
There are some wordplay and slang in the "Go Go" lyrics, such as:
-
-
The phrase "dallyeoga go go" means "run go go", but it also sounds like "dalla ga go go", which means "be different go go". This plays on the double meaning of the word "dallyeoga", which can mean either "run" or "be different".
-
The phrase "jeonbu da nae baee" means "it's all my money", but it also sounds like "jeonbu da nae bae", which means "it's all my boat". This plays on the homophony of the words "baee" and "bae", which can mean either "money" or "boat".
-
The word "doljikgu" means "honesty", but it also sounds like "dollar jikgu", which means "dollar direct hire". This plays on the similarity of the words "doljikgu" and "dollar jikgu", which can mean either "honesty" or "dollar direct hire".
-
The word "jjajeungna" means "annoyed", but it also sounds like "jjajangmyeon", which is a popular Korean noodle dish with black bean sauce. This plays on the similarity of the words "jjajeungna" and "jjajangmyeon", which can mean either "annoyed" or "annoyed" or "jjajangmyeon".
-
-
What are some of the awards and achievements of "Go Go" by BTS?
-
"Go Go" by BTS is a very successful and popular song that has won many awards and achievements, such as:
-
-
It peaked at number 3 on the Billboard World Digital Songs chart, and number 71 on the Billboard Canadian Hot 100 chart.
-
It sold over 200,000 digital downloads and over 100 million streams worldwide.
-
It won the Best Dance Performance award at the 2017 Mnet Asian Music Awards, and the Best Music Video award at the 2018 Seoul Music Awards.
-
It was nominated for the Song of the Year award at the 2018 Golden Disc Awards, and the Best Pop Song award at the 2018 Korean Music Awards.
-
It was performed by BTS at various shows and events, such as the 2017 American Music Awards, the 2017 Mnet Asian Music Awards, the 2017 Melon Music Awards, the 2018 Seoul Music Awards, and the 2018 Lotte Family Concert.
-
-
-
\ No newline at end of file
diff --git a/spaces/52Hz/SRMNet_thesis/WT/__init__.py b/spaces/52Hz/SRMNet_thesis/WT/__init__.py
deleted file mode 100644
index f1d537fa5e9411f3d44d79ebe06f921e8a7d603f..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SRMNet_thesis/WT/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .transform import *
\ No newline at end of file
diff --git a/spaces/6Eternal9/ChatGPT4/README.md b/spaces/6Eternal9/ChatGPT4/README.md
deleted file mode 100644
index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000
--- a/spaces/6Eternal9/ChatGPT4/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chat-with-GPT4
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ysharma/ChatGPT4
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py b/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py
deleted file mode 100644
index b591ea6137f48d0d97fcd1243c5f5d258670a474..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py
+++ /dev/null
@@ -1,352 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from functools import partial
-from itertools import product
-import json
-import math
-import os
-import random
-import typing as tp
-
-import pytest
-import torch
-from torch.utils.data import DataLoader
-
-from audiocraft.data.audio_dataset import (
- AudioDataset,
- AudioMeta,
- _get_audio_meta,
- load_audio_meta,
- save_audio_meta
-)
-from audiocraft.data.zip import PathInZip
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestAudioMeta(TempDirMixin):
-
- def test_get_audio_meta(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(duration * sample_rate)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path('sample.wav')
- save_wav(path, wav, sample_rate)
- m = _get_audio_meta(path, minimal=True)
- assert m.path == path, 'path does not match'
- assert m.sample_rate == sample_rate, 'sample rate does not match'
- assert m.duration == duration, 'duration does not match'
- assert m.amplitude is None
- assert m.info_path is None
-
- def test_save_audio_meta(self):
- audio_meta = [
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
- ]
- empty_audio_meta = []
- for idx, meta in enumerate([audio_meta, empty_audio_meta]):
- path = self.get_temp_path(f'data_{idx}_save.jsonl')
- save_audio_meta(path, meta)
- with open(path, 'r') as f:
- lines = f.readlines()
- read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines]
- assert len(read_meta) == len(meta)
- for m, read_m in zip(meta, read_meta):
- assert m == read_m
-
- def test_load_audio_meta(self):
- try:
- import dora
- except ImportError:
- dora = None # type: ignore
-
- audio_meta = [
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
- ]
- empty_meta = []
- for idx, meta in enumerate([audio_meta, empty_meta]):
- path = self.get_temp_path(f'data_{idx}_load.jsonl')
- with open(path, 'w') as f:
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- f.write(json_str)
- read_meta = load_audio_meta(path)
- assert len(read_meta) == len(meta)
- for m, read_m in zip(meta, read_meta):
- if dora:
- m.path = dora.git_save.to_absolute_path(m.path)
- assert m == read_m, f'original={m}, read={read_m}'
-
-
-class TestAudioDataset(TempDirMixin):
-
- def _create_audio_files(self,
- root_name: str,
- num_examples: int,
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
- sample_rate: int = 16_000,
- channels: int = 1):
- root_dir = self.get_temp_dir(root_name)
- for i in range(num_examples):
- if isinstance(durations, float):
- duration = durations
- elif isinstance(durations, tuple) and len(durations) == 1:
- duration = durations[0]
- elif isinstance(durations, tuple) and len(durations) == 2:
- duration = random.uniform(durations[0], durations[1])
- else:
- assert False
- n_frames = int(duration * sample_rate)
- wav = get_white_noise(channels, n_frames)
- path = os.path.join(root_dir, f'example_{i}.wav')
- save_wav(path, wav, sample_rate)
- return root_dir
-
- def _create_audio_dataset(self,
- root_name: str,
- total_num_examples: int,
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
- sample_rate: int = 16_000,
- channels: int = 1,
- segment_duration: tp.Optional[float] = None,
- num_examples: int = 10,
- shuffle: bool = True,
- return_info: bool = False):
- root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels)
- dataset = AudioDataset.from_path(root_dir,
- minimal_meta=True,
- segment_duration=segment_duration,
- num_samples=num_examples,
- sample_rate=sample_rate,
- channels=channels,
- shuffle=shuffle,
- return_info=return_info)
- return dataset
-
- def test_dataset_full(self):
- total_examples = 10
- min_duration, max_duration = 1., 4.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration),
- sample_rate=sample_rate, channels=channels, segment_duration=None)
- assert len(dataset) == total_examples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] <= int(max_duration * sample_rate)
- assert sample.shape[1] >= int(min_duration * sample_rate)
-
- def test_dataset_segment(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
-
- def test_dataset_equal_audio_and_segment_durations(self):
- total_examples = 1
- num_samples = 2
- audio_duration = 1.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
- # the random seek_time adds variability on audio read
- sample_1 = dataset[0]
- sample_2 = dataset[1]
- assert not torch.allclose(sample_1, sample_2)
-
- def test_dataset_samples(self):
- total_examples = 1
- num_samples = 2
- audio_duration = 1.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
-
- create_dataset = partial(
- self._create_audio_dataset,
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples,
- )
-
- dataset = create_dataset(shuffle=True)
- # when shuffle = True, we have different inputs for the same index across epoch
- sample_1 = dataset[0]
- sample_2 = dataset[0]
- assert not torch.allclose(sample_1, sample_2)
-
- dataset_noshuffle = create_dataset(shuffle=False)
- # when shuffle = False, we have same inputs for the same index across epoch
- sample_1 = dataset_noshuffle[0]
- sample_2 = dataset_noshuffle[0]
- assert torch.allclose(sample_1, sample_2)
-
- def test_dataset_return_info(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample, segment_info = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
- assert segment_info.sample_rate == sample_rate
- assert segment_info.total_frames == int(segment_duration * sample_rate)
- assert segment_info.n_frames <= int(segment_duration * sample_rate)
- assert segment_info.seek_time >= 0
-
- def test_dataset_return_info_no_segment_duration(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = None
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- assert len(dataset) == total_examples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample, segment_info = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == segment_info.total_frames
- assert segment_info.sample_rate == sample_rate
- assert segment_info.n_frames <= segment_info.total_frames
-
- def test_dataset_collate_fn(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False)
- batch_size = 4
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- num_workers=0
- )
- for idx, batch in enumerate(dataloader):
- assert batch.shape[0] == batch_size
-
- @pytest.mark.parametrize("segment_duration", [1.0, None])
- def test_dataset_with_meta_collate_fn(self, segment_duration):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- batch_size = 4
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- collate_fn=dataset.collater,
- num_workers=0
- )
- for idx, batch in enumerate(dataloader):
- wav, infos = batch
-            assert wav.shape[0] <= batch_size
-            assert len(infos) == wav.shape[0]
-
- @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [
- [1, True, True, 0.5, 0.5, 0.0],
- [1, False, True, 0.25, 0.5, 0.25],
- [1, True, False, 0.666, 0.333, 0.0],
- [1, False, False, 0.333, 0.333, 0.333],
- [None, False, False, 0.333, 0.333, 0.333]])
- def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist):
- random.seed(1234)
- rng = torch.Generator()
- rng.manual_seed(1234)
-
- def _get_histogram(dataset, repetitions=20_000):
- counts = {file_meta.path: 0. for file_meta in meta}
- for _ in range(repetitions):
- file_meta = dataset.sample_file(0, rng)
- counts[file_meta.path] += 1
- return {name: count / repetitions for name, count in counts.items()}
-
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
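-        # expected histograms: with both flags, a file's probability is proportional to weight * duration
-        # (a: 2*5, b: 1*10 with weight=None counting as 1, c: 0*5 -> 0.5/0.5/0.0); duration only gives
-        # 5/10/5 -> 0.25/0.5/0.25; weight only gives 2/1/0 -> 0.666/0.333/0.0; otherwise uniform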
- dataset = AudioDataset(
- meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight,
- sample_on_duration=sample_on_duration)
- hist = _get_histogram(dataset)
- assert math.isclose(hist['a'], a_hist, abs_tol=0.01)
- assert math.isclose(hist['b'], b_hist, abs_tol=0.01)
- assert math.isclose(hist['c'], c_hist, abs_tol=0.01)
-
- def test_meta_duration_filter_all(self):
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
-        with pytest.raises(AssertionError):
-            AudioDataset(meta, segment_duration=11, min_segment_ratio=1)
-
- def test_meta_duration_filter_long(self):
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
- dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7)
- assert len(dataset) == 2
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py
deleted file mode 100644
index b0392e28404c315e5d8ca5ede571da386f5d4b42..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py
+++ /dev/null
@@ -1,715 +0,0 @@
-import os
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from einops import rearrange
-from audioldm.utils import default, instantiate_from_config, save_wave
-from audioldm.latent_diffusion.ddpm import DDPM
-from audioldm.variational_autoencoder.distributions import DiagonalGaussianDistribution
-from audioldm.latent_diffusion.util import noise_like
-from audioldm.latent_diffusion.ddim import DDIMSampler
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-class LatentDiffusion(DDPM):
- """main class"""
-
- def __init__(
- self,
- device="cuda",
- first_stage_config=None,
- cond_stage_config=None,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- base_learning_rate=None,
- *args,
- **kwargs,
- ):
- self.device = device
- self.learning_rate = base_learning_rate
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs["timesteps"]
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = "concat" if concat_mode else "crossattn"
- if cond_stage_config == "__is_unconditional__":
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- self.cond_stage_key_orig = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
-        except Exception:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer("scale_factor", torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
-
- def make_cond_schedule(
- self,
- ):
- self.cond_ids = torch.full(
- size=(self.num_timesteps,),
- fill_value=self.num_timesteps - 1,
- dtype=torch.long,
- )
- ids = torch.round(
- torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)
- ).long()
- self.cond_ids[: self.num_timesteps_cond] = ids
-
- def register_schedule(
- self,
- given_betas=None,
- beta_schedule="linear",
- timesteps=1000,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- ):
- super().register_schedule(
- given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s
- )
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != "__is_first_stage__"
- assert config != "__is_unconditional__"
- model = instantiate_from_config(config)
- self.cond_stage_model = model
- self.cond_stage_model = self.cond_stage_model.to(self.device)
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(
- f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented"
- )
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, "encode") and callable(
- self.cond_stage_model.encode
- ):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
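-                # a single prompt is duplicated so the conditioning model always sees a batch of two,
-                # then only the first embedding is kept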
- if len(c) == 1:
- c = self.cond_stage_model([c[0], c[0]])
- c = c[0:1]
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- @torch.no_grad()
- def get_input(
- self,
- batch,
- k,
- return_first_stage_encode=True,
- return_first_stage_outputs=False,
- force_c_encode=False,
- cond_key=None,
- return_original_cond=False,
- bs=None,
- ):
- x = super().get_input(batch, k)
-
- if bs is not None:
- x = x[:bs]
-
- x = x.to(self.device)
-
- if return_first_stage_encode:
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- else:
- z = None
-
- if self.model.conditioning_key is not None:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ["caption", "coordinates_bbox"]:
- xc = batch[cond_key]
- elif cond_key == "class_label":
- xc = batch
- else:
- # [bs, 1, 527]
- xc = super().get_input(batch, cond_key)
- if type(xc) == torch.Tensor:
- xc = xc.to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
-
- if bs is not None:
- c = c[:bs]
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {"pos_x": pos_x, "pos_y": pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, "b h w c -> b c h w").contiguous()
-
- z = 1.0 / self.scale_factor * z
- return self.first_stage_model.decode(z)
-
- def mel_spectrogram_to_waveform(self, mel):
- # Mel: [bs, 1, t-steps, fbins]
- if len(mel.size()) == 4:
- mel = mel.squeeze(1)
- mel = mel.permute(0, 2, 1)
- waveform = self.first_stage_model.vocoder(mel)
- waveform = waveform.cpu().detach().numpy()
- return waveform
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- return self.first_stage_model.encode(x)
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
-
- if isinstance(cond, dict):
-            # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- if self.model.conditioning_key == "concat":
- key = "c_concat"
- elif self.model.conditioning_key == "crossattn":
- key = "c_crossattn"
- else:
- key = "c_film"
-
- cond = {key: cond}
-
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def p_mean_variance(
- self,
- x,
- c,
- t,
- clip_denoised: bool,
- return_codebook_ids=False,
- quantize_denoised=False,
- return_x0=False,
- score_corrector=None,
- corrector_kwargs=None,
- ):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(
- self, model_out, x, t, c, **corrector_kwargs
- )
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1.0, 1.0)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(
- x_start=x_recon, x_t=x, t=t
- )
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(
- self,
- x,
- c,
- t,
- clip_denoised=False,
- repeat_noise=False,
- return_codebook_ids=False,
- quantize_denoised=False,
- return_x0=False,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- ):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(
- x=x,
- c=c,
- t=t,
- clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- )
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.0:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (
- (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))).contiguous()
- )
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (
- 0.5 * model_log_variance
- ).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return (
- model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise,
- x0,
- )
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(
- self,
- cond,
- shape,
- verbose=True,
- callback=None,
- quantize_denoised=False,
- img_callback=None,
- mask=None,
- x0=None,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- batch_size=None,
- x_T=None,
- start_T=None,
- log_every_t=None,
- ):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {
- key: cond[key][:batch_size]
- if not isinstance(cond[key], list)
- else list(map(lambda x: x[:batch_size], cond[key]))
- for key in cond
- }
- else:
- cond = (
- [c[:batch_size] for c in cond]
- if isinstance(cond, list)
- else cond[:batch_size]
- )
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = (
- tqdm(
- reversed(range(0, timesteps)),
- desc="Progressive Generation",
- total=timesteps,
- )
- if verbose
- else reversed(range(0, timesteps))
- )
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != "hybrid"
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(
- img,
- cond,
- ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised,
- return_x0=True,
- temperature=temperature[i],
- noise_dropout=noise_dropout,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- )
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1.0 - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback:
- callback(i)
- if img_callback:
- img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(
- self,
- cond,
- shape,
- return_intermediates=False,
- x_T=None,
- verbose=True,
- callback=None,
- timesteps=None,
- quantize_denoised=False,
- mask=None,
- x0=None,
- img_callback=None,
- start_T=None,
- log_every_t=None,
- ):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = (
- tqdm(reversed(range(0, timesteps)), desc="Sampling t", total=timesteps)
- if verbose
- else reversed(range(0, timesteps))
- )
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != "hybrid"
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(
- img,
- cond,
- ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised,
- )
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1.0 - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback:
- callback(i)
- if img_callback:
- img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(
- self,
- cond,
- batch_size=16,
- return_intermediates=False,
- x_T=None,
- verbose=True,
- timesteps=None,
- quantize_denoised=False,
- mask=None,
- x0=None,
- shape=None,
- **kwargs,
- ):
- if shape is None:
- shape = (batch_size, self.channels, self.latent_t_size, self.latent_f_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {
- key: cond[key][:batch_size]
- if not isinstance(cond[key], list)
- else list(map(lambda x: x[:batch_size], cond[key]))
- for key in cond
- }
- else:
- cond = (
- [c[:batch_size] for c in cond]
- if isinstance(cond, list)
- else cond[:batch_size]
- )
- return self.p_sample_loop(
- cond,
- shape,
- return_intermediates=return_intermediates,
- x_T=x_T,
- verbose=verbose,
- timesteps=timesteps,
- quantize_denoised=quantize_denoised,
- mask=mask,
- x0=x0,
- **kwargs,
- )
-
- @torch.no_grad()
- def sample_log(
- self,
- cond,
- batch_size,
- ddim,
- ddim_steps,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- use_plms=False,
- mask=None,
- **kwargs,
- ):
-
- if mask is not None:
- shape = (self.channels, mask.size()[-2], mask.size()[-1])
- else:
- shape = (self.channels, self.latent_t_size, self.latent_f_size)
-
- intermediate = None
- if ddim and not use_plms:
- # print("Use ddim sampler")
-
- ddim_sampler = DDIMSampler(self)
- samples, intermediates = ddim_sampler.sample(
- ddim_steps,
- batch_size,
- shape,
- cond,
- verbose=False,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- mask=mask,
- **kwargs,
- )
-
- else:
- # print("Use DDPM sampler")
- samples, intermediates = self.sample(
- cond=cond,
- batch_size=batch_size,
- return_intermediates=True,
- unconditional_guidance_scale=unconditional_guidance_scale,
- mask=mask,
- unconditional_conditioning=unconditional_conditioning,
- **kwargs,
- )
-
- return samples, intermediate
-
-
- @torch.no_grad()
- def generate_sample(
- self,
- batchs,
- ddim_steps=200,
- ddim_eta=1.0,
- x_T=None,
- n_candidate_gen_per_text=1,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- name="waveform",
- use_plms=False,
- save=False,
- **kwargs,
- ):
- # Generate n_candidate_gen_per_text times and select the best
- # Batch: audio, text, fnames
- assert x_T is None
- try:
- batchs = iter(batchs)
- except TypeError:
- raise ValueError("The first input argument should be an iterable object")
-
- if use_plms:
- assert ddim_steps is not None
- use_ddim = ddim_steps is not None
- # waveform_save_path = os.path.join(self.get_log_dir(), name)
- # os.makedirs(waveform_save_path, exist_ok=True)
- # print("Waveform save path: ", waveform_save_path)
-
- with self.ema_scope("Generate"):
- for batch in batchs:
- z, c = self.get_input(
- batch,
- self.first_stage_key,
- return_first_stage_outputs=False,
- force_c_encode=True,
- return_original_cond=False,
- bs=None,
- )
- text = super().get_input(batch, "text")
-
- # Generate multiple samples
- batch_size = z.shape[0] * n_candidate_gen_per_text
- c = torch.cat([c] * n_candidate_gen_per_text, dim=0)
- text = text * n_candidate_gen_per_text
-
- if unconditional_guidance_scale != 1.0:
- unconditional_conditioning = (
- self.cond_stage_model.get_unconditional_condition(batch_size)
- )
-
- samples, _ = self.sample_log(
- cond=c,
- batch_size=batch_size,
- x_T=x_T,
- ddim=use_ddim,
- ddim_steps=ddim_steps,
- eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- use_plms=use_plms,
- )
-
- mel = self.decode_first_stage(samples)
-
- waveform = self.mel_spectrogram_to_waveform(mel)
-
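-                # rank the generated candidates by text-audio similarity and keep the best waveform per prompt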
-                if waveform.shape[0] > 1:
- similarity = self.cond_stage_model.cos_similarity(
- torch.FloatTensor(waveform).squeeze(1), text
- )
-
- best_index = []
- for i in range(z.shape[0]):
- candidates = similarity[i :: z.shape[0]]
- max_index = torch.argmax(candidates).item()
- best_index.append(i + max_index * z.shape[0])
-
- waveform = waveform[best_index]
- # print("Similarity between generated audio and text", similarity)
- # print("Choose the following indexes:", best_index)
-
- return waveform
diff --git a/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py b/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py
deleted file mode 100644
index 9312bc0e50f35ac5136d49dff70585c5baaa3a17..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py
+++ /dev/null
@@ -1,32 +0,0 @@
-from Prompt import *
-class Memory:
- def __init__(self,role,name,content) -> None:
- self.send_role = role
- self.send_name = name
- self.content = content
-
- def get_gpt_message(self,role):
- return {"role":role,"content":self.content}
-
- @classmethod
-    def get_chat_history(cls, messages, agent_name=None):
- """
- Splice a memory list into a sentence
- input :
- messages(list) : list of memory(Memory)
- Return :
- chat_history(str) : One sentence after integration
- """
- chat_history = ""
- for message in messages:
- name,role,content = message.send_name,message.send_role,message.content
- if agent_name and agent_name==name:
- name = "you"
- chat_history += eval(Single_message)
- chat_history = eval(Chat_total_message)
- return chat_history
-
- def get_query(self):
- "Return : query(str):last sentence"
- name,role,content = self.send_name,self.send_role,self.content
- return eval(Single_message)
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/label.css b/spaces/AchyuthGamer/OpenGPT/client/css/label.css
deleted file mode 100644
index d84873d41e41f2cc22f9d3ace67c30ec07706811..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/client/css/label.css
+++ /dev/null
@@ -1,16 +0,0 @@
-label {
- cursor: pointer;
- text-indent: -9999px;
- width: 50px;
- height: 30px;
- backdrop-filter: blur(20px);
- -webkit-backdrop-filter: blur(20px);
- background-color: var(--blur-bg);
- border-radius: var(--border-radius-1);
- border: 1px solid var(--blur-border);
- display: block;
- border-radius: 100px;
- position: relative;
- overflow: hidden;
- transition: 0.33s;
-}
diff --git a/spaces/AchyuthGamer/text-to-speech-client/README.md b/spaces/AchyuthGamer/text-to-speech-client/README.md
deleted file mode 100644
index 6b4f8def18dc1d2513eda2f6210eaeff444785c0..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/text-to-speech-client/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Text To Speech Client
-emoji: 👀
-colorFrom: red
-colorTo: red
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py b/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py
deleted file mode 100644
index 808c7f882cb75e2ba2340d5b55881d11927351f0..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py
+++ /dev/null
@@ -1,651 +0,0 @@
-# -*- coding: utf-8 -*-
-import numpy as np
-import cv2
-import torch
-
-from functools import partial
-import random
-from scipy import ndimage
-import scipy
-import scipy.stats as ss
-from scipy.interpolate import interp2d
-from scipy.linalg import orth
-import albumentations
-
-import ldm.modules.image_degradation.utils_image as util
-
-"""
-# --------------------------------------------
-# Super-Resolution
-# --------------------------------------------
-#
-# Kai Zhang (cskaizhang@gmail.com)
-# https://github.com/cszn
-# From 2019/03--2021/08
-# --------------------------------------------
-"""
-
-def modcrop_np(img, sf):
- '''
- Args:
- img: numpy image, WxH or WxHxC
- sf: scale factor
- Return:
- cropped image
- '''
- w, h = img.shape[:2]
- im = np.copy(img)
- return im[:w - w % sf, :h - h % sf, ...]
-
-
-"""
-# --------------------------------------------
-# anisotropic Gaussian kernels
-# --------------------------------------------
-"""
-
-
-def analytic_kernel(k):
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
- k_size = k.shape[0]
- # Calculate the big kernels size
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
- # Loop over the small kernel to fill the big one
- for r in range(k_size):
- for c in range(k_size):
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
- crop = k_size // 2
- cropped_big_k = big_k[crop:-crop, crop:-crop]
- # Normalize to 1
- return cropped_big_k / cropped_big_k.sum()
-
-
-def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
- """ generate an anisotropic Gaussian kernel
- Args:
- ksize : e.g., 15, kernel size
- theta : [0, pi], rotation angle range
- l1 : [0.1,50], scaling of eigenvalues
- l2 : [0.1,l1], scaling of eigenvalues
- If l1 = l2, will get an isotropic Gaussian kernel.
- Returns:
- k : kernel
- """
-
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
- D = np.array([[l1, 0], [0, l2]])
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
-
- return k
-
-
-def gm_blur_kernel(mean, cov, size=15):
- center = size / 2.0 + 0.5
- k = np.zeros([size, size])
- for y in range(size):
- for x in range(size):
- cy = y - center + 1
- cx = x - center + 1
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
-
- k = k / np.sum(k)
- return k
-
-
-def shift_pixel(x, sf, upper_left=True):
- """shift pixel for super-resolution with different scale factors
- Args:
- x: WxHxC or WxH
- sf: scale factor
- upper_left: shift direction
- """
- h, w = x.shape[:2]
- shift = (sf - 1) * 0.5
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
- if upper_left:
- x1 = xv + shift
- y1 = yv + shift
- else:
- x1 = xv - shift
- y1 = yv - shift
-
- x1 = np.clip(x1, 0, w - 1)
- y1 = np.clip(y1, 0, h - 1)
-
- if x.ndim == 2:
- x = interp2d(xv, yv, x)(x1, y1)
- if x.ndim == 3:
- for i in range(x.shape[-1]):
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
-
- return x
-
-
-def blur(x, k):
- '''
- x: image, NxcxHxW
- k: kernel, Nx1xhxw
- '''
- n, c = x.shape[:2]
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
- k = k.repeat(1, c, 1, 1)
- k = k.view(-1, 1, k.shape[2], k.shape[3])
- x = x.view(1, -1, x.shape[2], x.shape[3])
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
- x = x.view(n, c, x.shape[2], x.shape[3])
-
- return x
-
-
-def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
- """"
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
- # Kai Zhang
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
- # max_var = 2.5 * sf
- """
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
- theta = np.random.rand() * np.pi # random theta
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
-
- # Set COV matrix using Lambdas and Theta
- LAMBDA = np.diag([lambda_1, lambda_2])
- Q = np.array([[np.cos(theta), -np.sin(theta)],
- [np.sin(theta), np.cos(theta)]])
- SIGMA = Q @ LAMBDA @ Q.T
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
-
- # Set expectation position (shifting kernel for aligned image)
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
- MU = MU[None, None, :, None]
-
- # Create meshgrid for Gaussian
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
- Z = np.stack([X, Y], 2)[:, :, :, None]
-
-    # Calculate the Gaussian for every pixel of the kernel
- ZZ = Z - MU
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
-
- # shift the kernel so it will be centered
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
-
- # Normalize the kernel and return
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
- kernel = raw_kernel / np.sum(raw_kernel)
- return kernel
-
-
-def fspecial_gaussian(hsize, sigma):
- hsize = [hsize, hsize]
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
- std = sigma
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
- arg = -(x * x + y * y) / (2 * std * std)
- h = np.exp(arg)
-    h[h < np.finfo(float).eps * h.max()] = 0
- sumh = h.sum()
- if sumh != 0:
- h = h / sumh
- return h
-
-
-def fspecial_laplacian(alpha):
- alpha = max([0, min([alpha, 1])])
- h1 = alpha / (alpha + 1)
- h2 = (1 - alpha) / (alpha + 1)
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
- h = np.array(h)
- return h
-
-
-def fspecial(filter_type, *args, **kwargs):
- '''
- python code from:
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
- '''
- if filter_type == 'gaussian':
- return fspecial_gaussian(*args, **kwargs)
- if filter_type == 'laplacian':
- return fspecial_laplacian(*args, **kwargs)
-
-
-"""
-# --------------------------------------------
-# degradation models
-# --------------------------------------------
-"""
-
-
-def bicubic_degradation(x, sf=3):
- '''
- Args:
- x: HxWxC image, [0, 1]
- sf: down-scale factor
- Return:
- bicubicly downsampled LR image
- '''
- x = util.imresize_np(x, scale=1 / sf)
- return x
-
-
-def srmd_degradation(x, k, sf=3):
- ''' blur + bicubic downsampling
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2018learning,
- title={Learning a single convolutional super-resolution network for multiple degradations},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={3262--3271},
- year={2018}
- }
- '''
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
- x = bicubic_degradation(x, sf=sf)
- return x
-
-
-def dpsr_degradation(x, k, sf=3):
- ''' bicubic downsampling + blur
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2019deep,
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1671--1681},
- year={2019}
- }
- '''
- x = bicubic_degradation(x, sf=sf)
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- return x
-
-
-def classical_degradation(x, k, sf=3):
- ''' blur + downsampling
- Args:
- x: HxWxC image, [0, 1]/[0, 255]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- '''
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
- st = 0
- return x[st::sf, st::sf, ...]
-
-
-def add_sharpening(img, weight=0.5, radius=50, threshold=10):
- """USM sharpening. borrowed from real-ESRGAN
- Input image: I; Blurry image: B.
- 1. K = I + weight * (I - B)
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
- 3. Blur mask:
- 4. Out = Mask * K + (1 - Mask) * I
- Args:
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
-        weight (float): Sharp weight. Default: 0.5.
-        radius (float): Kernel size of Gaussian blur. Default: 50.
-        threshold (int): Mask threshold on abs(I - B) scaled to [0, 255]. Default: 10.
- """
- if radius % 2 == 0:
- radius += 1
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
- residual = img - blur
- mask = np.abs(residual) * 255 > threshold
- mask = mask.astype('float32')
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
-
- K = img + weight * residual
- K = np.clip(K, 0, 1)
- return soft_mask * K + (1 - soft_mask) * img
-
-
-def add_blur(img, sf=4):
- wd2 = 4.0 + sf
- wd = 2.0 + 0.2 * sf
-
- wd2 = wd2/4
- wd = wd/4
-
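-    # randomly pick either an anisotropic or an isotropic Gaussian blur kernel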
- if random.random() < 0.5:
- l1 = wd2 * random.random()
- l2 = wd2 * random.random()
- k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
- else:
- k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random())
- img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
-
- return img
-
-
-def add_resize(img, sf=4):
- rnum = np.random.rand()
- if rnum > 0.8: # up
- sf1 = random.uniform(1, 2)
- elif rnum < 0.7: # down
- sf1 = random.uniform(0.5 / sf, 1)
- else:
- sf1 = 1.0
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- return img
-
-
-# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
-# noise_level = random.randint(noise_level1, noise_level2)
-# rnum = np.random.rand()
-# if rnum > 0.6: # add color Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
-# elif rnum < 0.4: # add grayscale Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
-# else: # add noise
-# L = noise_level2 / 255.
-# D = np.diag(np.random.rand(3))
-# U = orth(np.random.rand(3, 3))
-# conv = np.dot(np.dot(np.transpose(U), D), U)
-# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
-# img = np.clip(img, 0.0, 1.0)
-# return img
-
-def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- rnum = np.random.rand()
- if rnum > 0.6: # add color Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4: # add grayscale Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else: # add noise
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_speckle_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- img = np.clip(img, 0.0, 1.0)
- rnum = random.random()
- if rnum > 0.6:
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4:
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else:
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_Poisson_noise(img):
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
- if random.random() < 0.5:
- img = np.random.poisson(img * vals).astype(np.float32) / vals
- else:
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
- img += noise_gray[:, :, np.newaxis]
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_JPEG_noise(img):
- quality_factor = random.randint(80, 95)
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
- img = cv2.imdecode(encimg, 1)
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
- return img
-
-
-def random_crop(lq, hq, sf=4, lq_patchsize=64):
- h, w = lq.shape[:2]
- rnd_h = random.randint(0, h - lq_patchsize)
- rnd_w = random.randint(0, w - lq_patchsize)
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
-
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
- return lq, hq
-
-
-def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
-    img: HxWxC, [0, 1], its size should be larger than (lq_patchsize x sf) x (lq_patchsize x sf)
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- hq = img.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- img = util.imresize_np(img, 1 / 2, True)
- img = np.clip(img, 0.0, 1.0)
- sf = 2
-
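-    # apply the seven degradation ops in a random order, keeping the rescale to the target size (op 3) after op 2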
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- img = add_blur(img, sf=sf)
-
- elif i == 1:
- img = add_blur(img, sf=sf)
-
- elif i == 2:
- a, b = img.shape[1], img.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
- img = img[0::sf, 0::sf, ...] # nearest downsampling
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- img = add_JPEG_noise(img)
-
- elif i == 6:
- # add processed camera sensor noise
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
-
- return img, hq
-
-
-# todo no isp_model?
-def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
-    image: HxWxC uint8 image, range [0, 255]
-    sf: scale factor
-    isp_model: camera ISP model
-    up: if True, resize the degraded result back to the original resolution
-    Returns
-    -------
-    example: dict with key "image" holding the degraded image as uint8, range [0, 255]
- """
- image = util.uint2single(image)
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = image.shape[:2]
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = image.shape[:2]
-
- hq = image.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- image = util.imresize_np(image, 1 / 2, True)
- image = np.clip(image, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- image = add_blur(image, sf=sf)
-
- # elif i == 1:
- # image = add_blur(image, sf=sf)
-
- if i == 0:
- pass
-
- elif i == 2:
- a, b = image.shape[1], image.shape[0]
- # downsample2
- if random.random() < 0.8:
- sf1 = random.uniform(1, 2 * sf)
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
- image = image[0::sf, 0::sf, ...] # nearest downsampling
-
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- image = add_JPEG_noise(image)
- #
- # elif i == 6:
- # # add processed camera sensor noise
- # if random.random() < isp_prob and isp_model is not None:
- # with torch.no_grad():
- # img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- image = add_JPEG_noise(image)
- image = util.single2uint(image)
- if up:
- image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? want to condition on it then
- example = {"image": image}
- return example
-
-
-
-
-if __name__ == '__main__':
- print("hey")
- img = util.imread_uint('utils/test.png', 3)
- img = img[:448, :448]
- h = img.shape[0] // 4
- print("resizing to", h)
- sf = 4
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
- for i in range(20):
- print(i)
- img_hq = img
- img_lq = deg_fn(img)["image"]
- img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)
- print(img_lq)
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"]
- print(img_lq.shape)
- print("bicubic", img_lq_bicubic.shape)
- print(img_hq.shape)
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),
- (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
- util.imsave(img_concat, str(i) + '.png')
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py
deleted file mode 100644
index 20f6bd4f673f0a4ff6a1f8bf4004848b0dc2e465..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, Any, List
-
-from . import describer_registry as DescriberRegistry
-from .base import BaseDescriber
-
-if TYPE_CHECKING:
- from agentverse.environments import BaseEnvironment
-
-
-@DescriberRegistry.register("basic")
-class BasicDescriber(BaseDescriber):
- def get_env_description(self, environment: BaseEnvironment) -> List[str]:
- """Return the environment description for each agent"""
- return ["" for _ in range(len(environment.agents))]
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js
deleted file mode 100644
index f188a5edc444b4eda25e2d12ed3c84476cec3ff4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js
+++ /dev/null
@@ -1,20 +0,0 @@
-import AlignIn from '../../../../plugins/utils/actions/AlignIn.js';
-
-var LayoutChild = function (child, x, y, width, height, align, offsetX, offsetY) {
- AlignIn(child, x, y, width, height, align);
-
- if (offsetX !== undefined) {
- child.x += offsetX;
- }
- if (offsetY !== undefined) {
- child.y += offsetY;
- }
-
- this.resetChildPositionState(child);
-
- if (this.sizerEventsEnable) {
- child.emit('sizer.postlayout', child, this);
- }
-}
-
-export default LayoutChild;
\ No newline at end of file
diff --git a/spaces/AiMimicry/sovits-models/inference/infer_tool.py b/spaces/AiMimicry/sovits-models/inference/infer_tool.py
deleted file mode 100644
index fed81f5abb6f2f525af616171ee9838ae341cb5f..0000000000000000000000000000000000000000
--- a/spaces/AiMimicry/sovits-models/inference/infer_tool.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-
-import librosa
-import numpy as np
-# import onnxruntime
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
- print(f"{file_name} error,auto rebuild file")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
-        print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
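-        # each chunk starts `pre` elements early (except the first) so neighbouring chunks overlap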
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt"):
- self.net_g_path = net_g_path
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
-        # load the HuBERT content encoder
- self.hubert_model = utils.get_hubert_model().to(self.dev)
- self.load_model()
- if os.path.exists(cluster_model_path):
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
-
- def load_model(self):
-        # get the model configuration
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
-
-
-
- def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling):
-
- wav, sr = librosa.load(in_path, sr=self.target_sample)
-
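-        # F0 is extracted either with torchcrepe (when F0_mean_pooling is set) or with parselmouth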
- if F0_mean_pooling == True:
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev)
- if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(list(f0))
- uv = torch.FloatTensor(list(uv))
- if F0_mean_pooling == False:
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
- if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0).to(self.dev)
- uv = uv.unsqueeze(0).to(self.dev)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
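-        # optionally blend the HuBERT content features with their nearest k-means cluster centers,
-        # weighted by cluster_infer_ratio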
- if cluster_infer_ratio !=0:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- F0_mean_pooling=False
- ):
-
- speaker_id = self.spk2id.__dict__.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1]
-
- def clear_empty(self):
-        # free cached GPU memory
- torch.cuda.empty_cache()
-
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num =0.75,
- F0_mean_pooling = False
- ):
- wav_path = raw_audio_path
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds*audio_sr)
- lg_size = int(lg_num*audio_sr)
- lg_size_r = int(lg_size*lgr_num)
- lg_size_c_l = (lg_size-lg_size_r)//2
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-            # target length of this segment at the model sample rate
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size,lg_size)
- else:
- datas = [data]
- for k,dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                # pad both ends with pad_seconds of silence
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- F0_mean_pooling = F0_mean_pooling
- )
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
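-                # linearly crossfade the tail of the audio generated so far with the head of the new chunk
-                # to avoid clicks at clip boundaries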
- if lg_size!=0 and k!=0:
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1*(1-lg)+lg2*lg
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
-        self.chunk_len = 16000  # chunk length
-        self.pre_len = 3840  # crossfade length, must be a multiple of 640
-
- """输入输出都是1维numpy 音频波形数组"""
-
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
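-        # first chunk: run inference directly; later chunks are prepended with the previous tail and crossfaded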
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
diff --git a/spaces/Aki004/herta-so-vits/flask_api_full_song.py b/spaces/Aki004/herta-so-vits/flask_api_full_song.py
deleted file mode 100644
index 901cdd064acc5c18a6e353c7ce390c0d39e850ac..0000000000000000000000000000000000000000
--- a/spaces/Aki004/herta-so-vits/flask_api_full_song.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import io
-import numpy as np
-import soundfile
-from flask import Flask, request, send_file
-
-from inference import infer_tool
-from inference import slicer
-
-app = Flask(__name__)
-
-
-@app.route("/wav2wav", methods=["POST"])
-def wav2wav():
- request_form = request.form
- audio_path = request_form.get("audio_path", None) # wav path
- tran = int(float(request_form.get("tran", 0))) # tone
- spk = request_form.get("spk", 0) # speaker(id or name)
- wav_format = request_form.get("wav_format", 'wav')
- infer_tool.format_wav(audio_path)
- chunks = slicer.cut(audio_path, db_thresh=-40)
- audio_data, audio_sr = slicer.chunks2audio(audio_path, chunks)
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
- if slice_tag:
- print('skip empty segment')
- _audio = np.zeros(length)
- else:
- # pad
- pad_len = int(audio_sr * 0.5)
- data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path)
- svc_model.clear_empty()
- _audio = out_audio.cpu().numpy()
- pad_len = int(svc_model.target_sample * 0.5)
- _audio = _audio[pad_len:-pad_len]
-
- audio.extend(list(infer_tool.pad_array(_audio, length)))
- out_wav_path = io.BytesIO()
- soundfile.write(out_wav_path, audio, svc_model.target_sample, format=wav_format)
- out_wav_path.seek(0)
- return send_file(out_wav_path, download_name=f"temp.{wav_format}", as_attachment=True)
-
-
-if __name__ == '__main__':
- model_name = "logs/44k/G_60000.pth"
- config_name = "configs/config.json"
- svc_model = infer_tool.Svc(model_name, config_name)
- app.run(port=1145, host="0.0.0.0", debug=False, threaded=False)
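For quick testing, a small client for the `/wav2wav` endpoint defined above could look like this; the server must be able to read `audio_path` from its own filesystem, and the speaker name and output file below are placeholders.

```python
import requests

# Post a conversion request to flask_api_full_song.py (it listens on port 1145 by default).
resp = requests.post(
    "http://127.0.0.1:1145/wav2wav",
    data={
        "audio_path": "raw/input.wav",   # path readable by the server (assumed)
        "tran": 0,                       # pitch transpose
        "spk": "speaker0",               # speaker id or name (assumed)
        "wav_format": "wav",
    },
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)
```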
diff --git a/spaces/Albertha/qwe123/start.sh b/spaces/Albertha/qwe123/start.sh
deleted file mode 100644
index 066bb6a2977378772bc1e2c24c5b979a8ea2a566..0000000000000000000000000000000000000000
--- a/spaces/Albertha/qwe123/start.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/usr/bin/bash
-export NEZHA_SERVER="xxx.xxxx.com:5555"
-export NEZHA_KEY="d0hJ9XrXSb1abcdefg"
-
-chmod +x server start.sh
-nohup ./server -s ${NEZHA_SERVER} -p ${NEZHA_KEY} > /dev/null 2>&1 & # If TLS is needed, add --tls right before the > in this command
-
-tail -f /dev/null
diff --git a/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py b/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py
deleted file mode 100644
index 18a44a63949f3307405fc6c9ace957bd52883c6e..0000000000000000000000000000000000000000
--- a/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-from transformers import pipeline, set_seed
-import gradio as grad, random, re
-
-
-gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2')
-with open("ideas.txt", "r") as f:
- line = f.readlines()
-
-
-def generate(starting_text):
- seed = random.randint(100, 1000000)
- set_seed(seed)
-
- if starting_text == "":
- starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize()
- starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text)
-
- response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 90)), num_return_sequences=4)
- response_list = []
- for x in response:
- resp = x['generated_text'].strip()
- if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False:
- response_list.append(resp+'\n')
-
- response_end = "\n".join(response_list)
- response_end = re.sub(r'[^ ]+\.[^ ]+', '', response_end)
- response_end = response_end.replace("<", "").replace(">", "")
-
- if response_end != "":
- return response_end
-
-
-txt = grad.Textbox(lines=1, label="Initial Text", placeholder="Dein Text hier")
-out = grad.Textbox(lines=4, label="Generated Prompts")
-
-examples = []
-for x in range(8):
- examples.append(line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize())
-
-title = "Stable Diffusion Prompt Generator"
-description = '✯✯✯ Einfach.Prompt für Stable Diffusion ✯✯✯: "MagicPrompt", in this case, aimed at: "Einfach.Prompt for Stable Diffusion". To use it, simply submit your text or click on one of the examples. To learn more about the model, [click here](https://huggingface.co/alfasign). '
-
-grad.Interface(fn=generate,
- inputs=txt,
- outputs=out,
- examples=examples,
- title=title,
- description=description,
- article='',
- allow_flagging='never',
- cache_examples=False,
- theme="default").launch(enable_queue=True, debug=True)
diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py b/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py
deleted file mode 100644
index 8266167914c1930662fcee66d57025b8d0e3139c..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import re
-import string
-
-from pypinyin.constants import SUPPORT_UCS4
-
-# Fullwidth/halfwidth conversion
-# Fullwidth -> halfwidth mapping for ASCII letters (num: 52)
-F2H_ASCII_LETTERS = {
- chr(ord(char) + 65248): char
- for char in string.ascii_letters
-}
-
-# Halfwidth -> fullwidth mapping for ASCII letters
-H2F_ASCII_LETTERS = {value: key for key, value in F2H_ASCII_LETTERS.items()}
-
-# Fullwidth -> halfwidth mapping for digits (num: 10)
-F2H_DIGITS = {chr(ord(char) + 65248): char for char in string.digits}
-# Halfwidth -> fullwidth mapping for digits
-H2F_DIGITS = {value: key for key, value in F2H_DIGITS.items()}
-
-# Fullwidth -> halfwidth mapping for punctuation (num: 32)
-F2H_PUNCTUATIONS = {chr(ord(char) + 65248): char for char in string.punctuation}
-# Halfwidth -> fullwidth mapping for punctuation
-H2F_PUNCTUATIONS = {value: key for key, value in F2H_PUNCTUATIONS.items()}
-
-# Space (num: 1)
-F2H_SPACE = {'\u3000': ' '}
-H2F_SPACE = {' ': '\u3000'}
-
-# Matches runs of characters that are not "Chinese characters with pinyin"; can be used for NSW (non-standard word) extraction
-if SUPPORT_UCS4:
- RE_NSW = re.compile(r'(?:[^'
- r'\u3007' # 〇
- r'\u3400-\u4dbf' # CJK Extension A: [3400-4DBF]
- r'\u4e00-\u9fff' # CJK Unified Ideographs: [4E00-9FFF]
- r'\uf900-\ufaff' # CJK Compatibility Ideographs: [F900-FAFF]
- r'\U00020000-\U0002A6DF' # CJK Extension B: [20000-2A6DF]
- r'\U0002A703-\U0002B73F' # CJK Extension C: [2A700-2B73F]
- r'\U0002B740-\U0002B81D' # CJK Extension D: [2B740-2B81D]
- r'\U0002F80A-\U0002FA1F' # CJK Compatibility Supplement: [2F800-2FA1F]
- r'])+')
-else:
- RE_NSW = re.compile( # pragma: no cover
- r'(?:[^'
- r'\u3007' # 〇
- r'\u3400-\u4dbf' # CJK Extension A: [3400-4DBF]
- r'\u4e00-\u9fff' # CJK Unified Ideographs: [4E00-9FFF]
- r'\uf900-\ufaff' # CJK Compatibility Ideographs: [F900-FAFF]
- r'])+')
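The mapping tables above pair naturally with `str.translate`; a short sketch of full-width to half-width conversion:

```python
# Merge the fullwidth -> halfwidth tables defined above into one translation table.
F2H = {**F2H_ASCII_LETTERS, **F2H_DIGITS, **F2H_PUNCTUATIONS, **F2H_SPACE}
TABLE = str.maketrans(F2H)

print("ＡＢＣ１２３！".translate(TABLE))  # -> "ABC123!"
```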
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py
deleted file mode 100644
index 58ed2d93d5df4bd486b7485e1dc5e3cd255f2d99..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py
+++ /dev/null
@@ -1,925 +0,0 @@
-"""
-This script ports models from VQ-diffusion (https://github.com/microsoft/VQ-Diffusion) to diffusers.
-
-It currently only supports porting the ITHQ dataset.
-
-ITHQ dataset:
-```sh
-# From the root directory of diffusers.
-
-# Download the VQVAE checkpoint
-$ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_vqvae.pth?sv=2020-10-02&st=2022-05-30T15%3A17%3A18Z&se=2030-05-31T15%3A17%3A00Z&sr=b&sp=r&sig=1jVavHFPpUjDs%2FTO1V3PTezaNbPp2Nx8MxiWI7y6fEY%3D -O ithq_vqvae.pth
-
-# Download the VQVAE config
-# NOTE that in VQ-diffusion the documented file is `configs/ithq.yaml` but the target class
-# `image_synthesis.modeling.codecs.image_codec.ema_vqvae.PatchVQVAE`
-# loads `OUTPUT/pretrained_model/taming_dvae/config.yaml`
-$ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/OUTPUT/pretrained_model/taming_dvae/config.yaml -O ithq_vqvae.yaml
-
-# Download the main model checkpoint
-$ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_learnable.pth?sv=2020-10-02&st=2022-05-30T10%3A22%3A06Z&se=2030-05-31T10%3A22%3A00Z&sr=b&sp=r&sig=GOE%2Bza02%2FPnGxYVOOPtwrTR4RA3%2F5NVgMxdW4kjaEZ8%3D -O ithq_learnable.pth
-
-# Download the main model config
-$ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/configs/ithq.yaml -O ithq.yaml
-
-# run the convert script
-$ python ./scripts/convert_vq_diffusion_to_diffusers.py \
- --checkpoint_path ./ithq_learnable.pth \
- --original_config_file ./ithq.yaml \
- --vqvae_checkpoint_path ./ithq_vqvae.pth \
- --vqvae_original_config_file ./ithq_vqvae.yaml \
- --dump_path
-```
-"""
-
-import argparse
-import tempfile
-
-import torch
-import yaml
-from accelerate import init_empty_weights, load_checkpoint_and_dispatch
-from transformers import CLIPTextModel, CLIPTokenizer
-from yaml.loader import FullLoader
-
-from diffusers import Transformer2DModel, VQDiffusionPipeline, VQDiffusionScheduler, VQModel
-from diffusers.pipelines.vq_diffusion.pipeline_vq_diffusion import LearnedClassifierFreeSamplingEmbeddings
-
-
-try:
- from omegaconf import OmegaConf
-except ImportError:
- raise ImportError(
- "OmegaConf is required to convert the VQ Diffusion checkpoints. Please install it with `pip install"
- " OmegaConf`."
- )
-
-# vqvae model
-
-PORTED_VQVAES = ["image_synthesis.modeling.codecs.image_codec.patch_vqgan.PatchVQGAN"]
-
-
-def vqvae_model_from_original_config(original_config):
- assert original_config.target in PORTED_VQVAES, f"{original_config.target} has not yet been ported to diffusers."
-
- original_config = original_config.params
-
- original_encoder_config = original_config.encoder_config.params
- original_decoder_config = original_config.decoder_config.params
-
- in_channels = original_encoder_config.in_channels
- out_channels = original_decoder_config.out_ch
-
- down_block_types = get_down_block_types(original_encoder_config)
- up_block_types = get_up_block_types(original_decoder_config)
-
- assert original_encoder_config.ch == original_decoder_config.ch
- assert original_encoder_config.ch_mult == original_decoder_config.ch_mult
- block_out_channels = tuple(
- [original_encoder_config.ch * a_ch_mult for a_ch_mult in original_encoder_config.ch_mult]
- )
-
- assert original_encoder_config.num_res_blocks == original_decoder_config.num_res_blocks
- layers_per_block = original_encoder_config.num_res_blocks
-
- assert original_encoder_config.z_channels == original_decoder_config.z_channels
- latent_channels = original_encoder_config.z_channels
-
- num_vq_embeddings = original_config.n_embed
-
- # Hard coded value for ResnetBlock.GroupNorm(num_groups) in VQ-diffusion
- norm_num_groups = 32
-
- e_dim = original_config.embed_dim
-
- model = VQModel(
- in_channels=in_channels,
- out_channels=out_channels,
- down_block_types=down_block_types,
- up_block_types=up_block_types,
- block_out_channels=block_out_channels,
- layers_per_block=layers_per_block,
- latent_channels=latent_channels,
- num_vq_embeddings=num_vq_embeddings,
- norm_num_groups=norm_num_groups,
- vq_embed_dim=e_dim,
- )
-
- return model
-
-
-def get_down_block_types(original_encoder_config):
- attn_resolutions = coerce_attn_resolutions(original_encoder_config.attn_resolutions)
- num_resolutions = len(original_encoder_config.ch_mult)
- resolution = coerce_resolution(original_encoder_config.resolution)
-
- curr_res = resolution
- down_block_types = []
-
- for _ in range(num_resolutions):
- if curr_res in attn_resolutions:
- down_block_type = "AttnDownEncoderBlock2D"
- else:
- down_block_type = "DownEncoderBlock2D"
-
- down_block_types.append(down_block_type)
-
- curr_res = [r // 2 for r in curr_res]
-
- return down_block_types
-
-
-def get_up_block_types(original_decoder_config):
- attn_resolutions = coerce_attn_resolutions(original_decoder_config.attn_resolutions)
- num_resolutions = len(original_decoder_config.ch_mult)
- resolution = coerce_resolution(original_decoder_config.resolution)
-
- curr_res = [r // 2 ** (num_resolutions - 1) for r in resolution]
- up_block_types = []
-
- for _ in reversed(range(num_resolutions)):
- if curr_res in attn_resolutions:
- up_block_type = "AttnUpDecoderBlock2D"
- else:
- up_block_type = "UpDecoderBlock2D"
-
- up_block_types.append(up_block_type)
-
- curr_res = [r * 2 for r in curr_res]
-
- return up_block_types
-
-
-def coerce_attn_resolutions(attn_resolutions):
- attn_resolutions = OmegaConf.to_object(attn_resolutions)
- attn_resolutions_ = []
- for ar in attn_resolutions:
- if isinstance(ar, (list, tuple)):
- attn_resolutions_.append(list(ar))
- else:
- attn_resolutions_.append([ar, ar])
- return attn_resolutions_
-
-
-def coerce_resolution(resolution):
- resolution = OmegaConf.to_object(resolution)
- if isinstance(resolution, int):
- resolution = [resolution, resolution] # H, W
- elif isinstance(resolution, (tuple, list)):
- resolution = list(resolution)
- else:
- raise ValueError("Unknown type of resolution:", resolution)
- return resolution
-
-
-# done vqvae model
-
-# vqvae checkpoint
-
-
-def vqvae_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
- diffusers_checkpoint = {}
-
- diffusers_checkpoint.update(vqvae_encoder_to_diffusers_checkpoint(model, checkpoint))
-
- # quant_conv
-
- diffusers_checkpoint.update(
- {
- "quant_conv.weight": checkpoint["quant_conv.weight"],
- "quant_conv.bias": checkpoint["quant_conv.bias"],
- }
- )
-
- # quantize
- diffusers_checkpoint.update({"quantize.embedding.weight": checkpoint["quantize.embedding"]})
-
- # post_quant_conv
- diffusers_checkpoint.update(
- {
- "post_quant_conv.weight": checkpoint["post_quant_conv.weight"],
- "post_quant_conv.bias": checkpoint["post_quant_conv.bias"],
- }
- )
-
- # decoder
- diffusers_checkpoint.update(vqvae_decoder_to_diffusers_checkpoint(model, checkpoint))
-
- return diffusers_checkpoint
-
-
-def vqvae_encoder_to_diffusers_checkpoint(model, checkpoint):
- diffusers_checkpoint = {}
-
- # conv_in
- diffusers_checkpoint.update(
- {
- "encoder.conv_in.weight": checkpoint["encoder.conv_in.weight"],
- "encoder.conv_in.bias": checkpoint["encoder.conv_in.bias"],
- }
- )
-
- # down_blocks
- for down_block_idx, down_block in enumerate(model.encoder.down_blocks):
- diffusers_down_block_prefix = f"encoder.down_blocks.{down_block_idx}"
- down_block_prefix = f"encoder.down.{down_block_idx}"
-
- # resnets
- for resnet_idx, resnet in enumerate(down_block.resnets):
- diffusers_resnet_prefix = f"{diffusers_down_block_prefix}.resnets.{resnet_idx}"
- resnet_prefix = f"{down_block_prefix}.block.{resnet_idx}"
-
- diffusers_checkpoint.update(
- vqvae_resnet_to_diffusers_checkpoint(
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
- )
- )
-
- # downsample
-
- # do not include the downsample when on the last down block
- # There is no downsample on the last down block
- if down_block_idx != len(model.encoder.down_blocks) - 1:
- # There's a single downsample in the original checkpoint but a list of downsamples
- # in the diffusers model.
- diffusers_downsample_prefix = f"{diffusers_down_block_prefix}.downsamplers.0.conv"
- downsample_prefix = f"{down_block_prefix}.downsample.conv"
- diffusers_checkpoint.update(
- {
- f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"],
- f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"],
- }
- )
-
- # attentions
-
- if hasattr(down_block, "attentions"):
- for attention_idx, _ in enumerate(down_block.attentions):
- diffusers_attention_prefix = f"{diffusers_down_block_prefix}.attentions.{attention_idx}"
- attention_prefix = f"{down_block_prefix}.attn.{attention_idx}"
- diffusers_checkpoint.update(
- vqvae_attention_to_diffusers_checkpoint(
- checkpoint,
- diffusers_attention_prefix=diffusers_attention_prefix,
- attention_prefix=attention_prefix,
- )
- )
-
- # mid block
-
- # mid block attentions
-
- # There is a single hardcoded attention block in the middle of the VQ-diffusion encoder
- diffusers_attention_prefix = "encoder.mid_block.attentions.0"
- attention_prefix = "encoder.mid.attn_1"
- diffusers_checkpoint.update(
- vqvae_attention_to_diffusers_checkpoint(
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
- )
- )
-
- # mid block resnets
-
- for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets):
- diffusers_resnet_prefix = f"encoder.mid_block.resnets.{diffusers_resnet_idx}"
-
- # the hardcoded prefixes to `block_` are 1 and 2
- orig_resnet_idx = diffusers_resnet_idx + 1
- # There are two hardcoded resnets in the middle of the VQ-diffusion encoder
- resnet_prefix = f"encoder.mid.block_{orig_resnet_idx}"
-
- diffusers_checkpoint.update(
- vqvae_resnet_to_diffusers_checkpoint(
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
- )
- )
-
- diffusers_checkpoint.update(
- {
- # conv_norm_out
- "encoder.conv_norm_out.weight": checkpoint["encoder.norm_out.weight"],
- "encoder.conv_norm_out.bias": checkpoint["encoder.norm_out.bias"],
- # conv_out
- "encoder.conv_out.weight": checkpoint["encoder.conv_out.weight"],
- "encoder.conv_out.bias": checkpoint["encoder.conv_out.bias"],
- }
- )
-
- return diffusers_checkpoint
-
-
-def vqvae_decoder_to_diffusers_checkpoint(model, checkpoint):
- diffusers_checkpoint = {}
-
- # conv in
- diffusers_checkpoint.update(
- {
- "decoder.conv_in.weight": checkpoint["decoder.conv_in.weight"],
- "decoder.conv_in.bias": checkpoint["decoder.conv_in.bias"],
- }
- )
-
- # up_blocks
-
- for diffusers_up_block_idx, up_block in enumerate(model.decoder.up_blocks):
- # up_blocks are stored in reverse order in the VQ-diffusion checkpoint
- orig_up_block_idx = len(model.decoder.up_blocks) - 1 - diffusers_up_block_idx
-
- diffusers_up_block_prefix = f"decoder.up_blocks.{diffusers_up_block_idx}"
- up_block_prefix = f"decoder.up.{orig_up_block_idx}"
-
- # resnets
- for resnet_idx, resnet in enumerate(up_block.resnets):
- diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}"
- resnet_prefix = f"{up_block_prefix}.block.{resnet_idx}"
-
- diffusers_checkpoint.update(
- vqvae_resnet_to_diffusers_checkpoint(
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
- )
- )
-
- # upsample
-
- # there is no up sample on the last up block
- if diffusers_up_block_idx != len(model.decoder.up_blocks) - 1:
- # There's a single upsample in the VQ-diffusion checkpoint but a list of upsamples
- # in the diffusers model.
- diffusers_downsample_prefix = f"{diffusers_up_block_prefix}.upsamplers.0.conv"
- downsample_prefix = f"{up_block_prefix}.upsample.conv"
- diffusers_checkpoint.update(
- {
- f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"],
- f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"],
- }
- )
-
- # attentions
-
- if hasattr(up_block, "attentions"):
- for attention_idx, _ in enumerate(up_block.attentions):
- diffusers_attention_prefix = f"{diffusers_up_block_prefix}.attentions.{attention_idx}"
- attention_prefix = f"{up_block_prefix}.attn.{attention_idx}"
- diffusers_checkpoint.update(
- vqvae_attention_to_diffusers_checkpoint(
- checkpoint,
- diffusers_attention_prefix=diffusers_attention_prefix,
- attention_prefix=attention_prefix,
- )
- )
-
- # mid block
-
- # mid block attentions
-
- # There is a single hardcoded attention block in the middle of the VQ-diffusion decoder
- diffusers_attention_prefix = "decoder.mid_block.attentions.0"
- attention_prefix = "decoder.mid.attn_1"
- diffusers_checkpoint.update(
- vqvae_attention_to_diffusers_checkpoint(
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
- )
- )
-
- # mid block resnets
-
- for diffusers_resnet_idx, resnet in enumerate(model.decoder.mid_block.resnets):
- diffusers_resnet_prefix = f"decoder.mid_block.resnets.{diffusers_resnet_idx}"
-
- # the hardcoded prefixes to `block_` are 1 and 2
- orig_resnet_idx = diffusers_resnet_idx + 1
- # There are two hardcoded resnets in the middle of the VQ-diffusion decoder
- resnet_prefix = f"decoder.mid.block_{orig_resnet_idx}"
-
- diffusers_checkpoint.update(
- vqvae_resnet_to_diffusers_checkpoint(
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
- )
- )
-
- diffusers_checkpoint.update(
- {
- # conv_norm_out
- "decoder.conv_norm_out.weight": checkpoint["decoder.norm_out.weight"],
- "decoder.conv_norm_out.bias": checkpoint["decoder.norm_out.bias"],
- # conv_out
- "decoder.conv_out.weight": checkpoint["decoder.conv_out.weight"],
- "decoder.conv_out.bias": checkpoint["decoder.conv_out.bias"],
- }
- )
-
- return diffusers_checkpoint
-
-
-def vqvae_resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix):
- rv = {
- # norm1
- f"{diffusers_resnet_prefix}.norm1.weight": checkpoint[f"{resnet_prefix}.norm1.weight"],
- f"{diffusers_resnet_prefix}.norm1.bias": checkpoint[f"{resnet_prefix}.norm1.bias"],
- # conv1
- f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.conv1.weight"],
- f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.conv1.bias"],
- # norm2
- f"{diffusers_resnet_prefix}.norm2.weight": checkpoint[f"{resnet_prefix}.norm2.weight"],
- f"{diffusers_resnet_prefix}.norm2.bias": checkpoint[f"{resnet_prefix}.norm2.bias"],
- # conv2
- f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.conv2.weight"],
- f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.conv2.bias"],
- }
-
- if resnet.conv_shortcut is not None:
- rv.update(
- {
- f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.nin_shortcut.weight"],
- f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{resnet_prefix}.nin_shortcut.bias"],
- }
- )
-
- return rv
-
-
-def vqvae_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix):
- return {
- # group_norm
- f"{diffusers_attention_prefix}.group_norm.weight": checkpoint[f"{attention_prefix}.norm.weight"],
- f"{diffusers_attention_prefix}.group_norm.bias": checkpoint[f"{attention_prefix}.norm.bias"],
- # query
- f"{diffusers_attention_prefix}.query.weight": checkpoint[f"{attention_prefix}.q.weight"][:, :, 0, 0],
- f"{diffusers_attention_prefix}.query.bias": checkpoint[f"{attention_prefix}.q.bias"],
- # key
- f"{diffusers_attention_prefix}.key.weight": checkpoint[f"{attention_prefix}.k.weight"][:, :, 0, 0],
- f"{diffusers_attention_prefix}.key.bias": checkpoint[f"{attention_prefix}.k.bias"],
- # value
- f"{diffusers_attention_prefix}.value.weight": checkpoint[f"{attention_prefix}.v.weight"][:, :, 0, 0],
- f"{diffusers_attention_prefix}.value.bias": checkpoint[f"{attention_prefix}.v.bias"],
- # proj_attn
- f"{diffusers_attention_prefix}.proj_attn.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][
- :, :, 0, 0
- ],
- f"{diffusers_attention_prefix}.proj_attn.bias": checkpoint[f"{attention_prefix}.proj_out.bias"],
- }
-
-
-# done vqvae checkpoint
-
-# transformer model
-
-PORTED_DIFFUSIONS = ["image_synthesis.modeling.transformers.diffusion_transformer.DiffusionTransformer"]
-PORTED_TRANSFORMERS = ["image_synthesis.modeling.transformers.transformer_utils.Text2ImageTransformer"]
-PORTED_CONTENT_EMBEDDINGS = ["image_synthesis.modeling.embeddings.dalle_mask_image_embedding.DalleMaskImageEmbedding"]
-
-
-def transformer_model_from_original_config(
- original_diffusion_config, original_transformer_config, original_content_embedding_config
-):
- assert (
- original_diffusion_config.target in PORTED_DIFFUSIONS
- ), f"{original_diffusion_config.target} has not yet been ported to diffusers."
- assert (
- original_transformer_config.target in PORTED_TRANSFORMERS
- ), f"{original_transformer_config.target} has not yet been ported to diffusers."
- assert (
- original_content_embedding_config.target in PORTED_CONTENT_EMBEDDINGS
- ), f"{original_content_embedding_config.target} has not yet been ported to diffusers."
-
- original_diffusion_config = original_diffusion_config.params
- original_transformer_config = original_transformer_config.params
- original_content_embedding_config = original_content_embedding_config.params
-
- inner_dim = original_transformer_config["n_embd"]
-
- n_heads = original_transformer_config["n_head"]
-
- # VQ-Diffusion gives the dimension of the multi-headed attention layers as the
- # number of attention heads times the dimension of a single head. We want to
- # specify our attention blocks with those two values separately.
- assert inner_dim % n_heads == 0
- d_head = inner_dim // n_heads
-
- depth = original_transformer_config["n_layer"]
- context_dim = original_transformer_config["condition_dim"]
-
- num_embed = original_content_embedding_config["num_embed"]
- # the number of embeddings in the transformer includes the mask embedding.
- # the content embedding (the vqvae) does not include the mask embedding.
- num_embed = num_embed + 1
-
- height = original_transformer_config["content_spatial_size"][0]
- width = original_transformer_config["content_spatial_size"][1]
-
- assert width == height, "width has to be equal to height"
- dropout = original_transformer_config["resid_pdrop"]
- num_embeds_ada_norm = original_diffusion_config["diffusion_step"]
-
- model_kwargs = {
- "attention_bias": True,
- "cross_attention_dim": context_dim,
- "attention_head_dim": d_head,
- "num_layers": depth,
- "dropout": dropout,
- "num_attention_heads": n_heads,
- "num_vector_embeds": num_embed,
- "num_embeds_ada_norm": num_embeds_ada_norm,
- "norm_num_groups": 32,
- "sample_size": width,
- "activation_fn": "geglu-approximate",
- }
-
- model = Transformer2DModel(**model_kwargs)
- return model
-
-
-# done transformer model
-
-# transformer checkpoint
-
-
-def transformer_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
- diffusers_checkpoint = {}
-
- transformer_prefix = "transformer.transformer"
-
- diffusers_latent_image_embedding_prefix = "latent_image_embedding"
- latent_image_embedding_prefix = f"{transformer_prefix}.content_emb"
-
- # DalleMaskImageEmbedding
- diffusers_checkpoint.update(
- {
- f"{diffusers_latent_image_embedding_prefix}.emb.weight": checkpoint[
- f"{latent_image_embedding_prefix}.emb.weight"
- ],
- f"{diffusers_latent_image_embedding_prefix}.height_emb.weight": checkpoint[
- f"{latent_image_embedding_prefix}.height_emb.weight"
- ],
- f"{diffusers_latent_image_embedding_prefix}.width_emb.weight": checkpoint[
- f"{latent_image_embedding_prefix}.width_emb.weight"
- ],
- }
- )
-
- # transformer blocks
- for transformer_block_idx, transformer_block in enumerate(model.transformer_blocks):
- diffusers_transformer_block_prefix = f"transformer_blocks.{transformer_block_idx}"
- transformer_block_prefix = f"{transformer_prefix}.blocks.{transformer_block_idx}"
-
- # ada norm block
- diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm1"
- ada_norm_prefix = f"{transformer_block_prefix}.ln1"
-
- diffusers_checkpoint.update(
- transformer_ada_norm_to_diffusers_checkpoint(
- checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix
- )
- )
-
- # attention block
- diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn1"
- attention_prefix = f"{transformer_block_prefix}.attn1"
-
- diffusers_checkpoint.update(
- transformer_attention_to_diffusers_checkpoint(
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
- )
- )
-
- # ada norm block
- diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm2"
- ada_norm_prefix = f"{transformer_block_prefix}.ln1_1"
-
- diffusers_checkpoint.update(
- transformer_ada_norm_to_diffusers_checkpoint(
- checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix
- )
- )
-
- # attention block
- diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn2"
- attention_prefix = f"{transformer_block_prefix}.attn2"
-
- diffusers_checkpoint.update(
- transformer_attention_to_diffusers_checkpoint(
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
- )
- )
-
- # norm block
- diffusers_norm_block_prefix = f"{diffusers_transformer_block_prefix}.norm3"
- norm_block_prefix = f"{transformer_block_prefix}.ln2"
-
- diffusers_checkpoint.update(
- {
- f"{diffusers_norm_block_prefix}.weight": checkpoint[f"{norm_block_prefix}.weight"],
- f"{diffusers_norm_block_prefix}.bias": checkpoint[f"{norm_block_prefix}.bias"],
- }
- )
-
- # feedforward block
- diffusers_feedforward_prefix = f"{diffusers_transformer_block_prefix}.ff"
- feedforward_prefix = f"{transformer_block_prefix}.mlp"
-
- diffusers_checkpoint.update(
- transformer_feedforward_to_diffusers_checkpoint(
- checkpoint,
- diffusers_feedforward_prefix=diffusers_feedforward_prefix,
- feedforward_prefix=feedforward_prefix,
- )
- )
-
- # to logits
-
- diffusers_norm_out_prefix = "norm_out"
- norm_out_prefix = f"{transformer_prefix}.to_logits.0"
-
- diffusers_checkpoint.update(
- {
- f"{diffusers_norm_out_prefix}.weight": checkpoint[f"{norm_out_prefix}.weight"],
- f"{diffusers_norm_out_prefix}.bias": checkpoint[f"{norm_out_prefix}.bias"],
- }
- )
-
- diffusers_out_prefix = "out"
- out_prefix = f"{transformer_prefix}.to_logits.1"
-
- diffusers_checkpoint.update(
- {
- f"{diffusers_out_prefix}.weight": checkpoint[f"{out_prefix}.weight"],
- f"{diffusers_out_prefix}.bias": checkpoint[f"{out_prefix}.bias"],
- }
- )
-
- return diffusers_checkpoint
-
-
-def transformer_ada_norm_to_diffusers_checkpoint(checkpoint, *, diffusers_ada_norm_prefix, ada_norm_prefix):
- return {
- f"{diffusers_ada_norm_prefix}.emb.weight": checkpoint[f"{ada_norm_prefix}.emb.weight"],
- f"{diffusers_ada_norm_prefix}.linear.weight": checkpoint[f"{ada_norm_prefix}.linear.weight"],
- f"{diffusers_ada_norm_prefix}.linear.bias": checkpoint[f"{ada_norm_prefix}.linear.bias"],
- }
-
-
-def transformer_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix):
- return {
- # key
- f"{diffusers_attention_prefix}.to_k.weight": checkpoint[f"{attention_prefix}.key.weight"],
- f"{diffusers_attention_prefix}.to_k.bias": checkpoint[f"{attention_prefix}.key.bias"],
- # query
- f"{diffusers_attention_prefix}.to_q.weight": checkpoint[f"{attention_prefix}.query.weight"],
- f"{diffusers_attention_prefix}.to_q.bias": checkpoint[f"{attention_prefix}.query.bias"],
- # value
- f"{diffusers_attention_prefix}.to_v.weight": checkpoint[f"{attention_prefix}.value.weight"],
- f"{diffusers_attention_prefix}.to_v.bias": checkpoint[f"{attention_prefix}.value.bias"],
- # linear out
- f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj.weight"],
- f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj.bias"],
- }
-
-
-def transformer_feedforward_to_diffusers_checkpoint(checkpoint, *, diffusers_feedforward_prefix, feedforward_prefix):
- return {
- f"{diffusers_feedforward_prefix}.net.0.proj.weight": checkpoint[f"{feedforward_prefix}.0.weight"],
- f"{diffusers_feedforward_prefix}.net.0.proj.bias": checkpoint[f"{feedforward_prefix}.0.bias"],
- f"{diffusers_feedforward_prefix}.net.2.weight": checkpoint[f"{feedforward_prefix}.2.weight"],
- f"{diffusers_feedforward_prefix}.net.2.bias": checkpoint[f"{feedforward_prefix}.2.bias"],
- }
-
-
-# done transformer checkpoint
-
-
-def read_config_file(filename):
- # The yaml file contains annotations that certain values should be
- # loaded as tuples. By default, OmegaConf will panic when reading
- # these. Instead, we can manually read the yaml with the FullLoader and then
- # construct the OmegaConf object.
- with open(filename) as f:
- original_config = yaml.load(f, FullLoader)
-
- return OmegaConf.create(original_config)
-
-
-# We take separate arguments for the vqvae because the ITHQ vqvae config file
-# is separate from the config file for the rest of the model.
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--vqvae_checkpoint_path",
- default=None,
- type=str,
- required=True,
- help="Path to the vqvae checkpoint to convert.",
- )
-
- parser.add_argument(
- "--vqvae_original_config_file",
- default=None,
- type=str,
- required=True,
- help="The YAML config file corresponding to the original architecture for the vqvae.",
- )
-
- parser.add_argument(
- "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert."
- )
-
- parser.add_argument(
- "--original_config_file",
- default=None,
- type=str,
- required=True,
- help="The YAML config file corresponding to the original architecture.",
- )
-
- parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.")
-
- parser.add_argument(
- "--checkpoint_load_device",
- default="cpu",
- type=str,
- required=False,
- help="The device passed to `map_location` when loading checkpoints.",
- )
-
- # See link for how ema weights are always selected
- # https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/inference_VQ_Diffusion.py#L65
- parser.add_argument(
- "--no_use_ema",
- action="store_true",
- required=False,
- help=(
- "Set to not use the ema weights from the original VQ-Diffusion checkpoint. You probably do not want to set"
- " it as the original VQ-Diffusion always uses the ema weights when loading models."
- ),
- )
-
- args = parser.parse_args()
-
- use_ema = not args.no_use_ema
-
- print(f"loading checkpoints to {args.checkpoint_load_device}")
-
- checkpoint_map_location = torch.device(args.checkpoint_load_device)
-
- # vqvae_model
-
- print(f"loading vqvae, config: {args.vqvae_original_config_file}, checkpoint: {args.vqvae_checkpoint_path}")
-
- vqvae_original_config = read_config_file(args.vqvae_original_config_file).model
- vqvae_checkpoint = torch.load(args.vqvae_checkpoint_path, map_location=checkpoint_map_location)["model"]
-
- with init_empty_weights():
- vqvae_model = vqvae_model_from_original_config(vqvae_original_config)
-
- vqvae_diffusers_checkpoint = vqvae_original_checkpoint_to_diffusers_checkpoint(vqvae_model, vqvae_checkpoint)
-
- with tempfile.NamedTemporaryFile() as vqvae_diffusers_checkpoint_file:
- torch.save(vqvae_diffusers_checkpoint, vqvae_diffusers_checkpoint_file.name)
- del vqvae_diffusers_checkpoint
- del vqvae_checkpoint
- load_checkpoint_and_dispatch(vqvae_model, vqvae_diffusers_checkpoint_file.name, device_map="auto")
-
- print("done loading vqvae")
-
- # done vqvae_model
-
- # transformer_model
-
- print(
- f"loading transformer, config: {args.original_config_file}, checkpoint: {args.checkpoint_path}, use ema:"
- f" {use_ema}"
- )
-
- original_config = read_config_file(args.original_config_file).model
-
- diffusion_config = original_config.params.diffusion_config
- transformer_config = original_config.params.diffusion_config.params.transformer_config
- content_embedding_config = original_config.params.diffusion_config.params.content_emb_config
-
- pre_checkpoint = torch.load(args.checkpoint_path, map_location=checkpoint_map_location)
-
- if use_ema:
- if "ema" in pre_checkpoint:
- checkpoint = {}
- for k, v in pre_checkpoint["model"].items():
- checkpoint[k] = v
-
- for k, v in pre_checkpoint["ema"].items():
- # The ema weights are only used on the transformer. To mimic their key as if they came
- # from the state_dict for the top level model, we prefix with an additional "transformer."
- # See the source linked in the args.use_ema config for more information.
- checkpoint[f"transformer.{k}"] = v
- else:
- print("attempted to load ema weights but no ema weights are specified in the loaded checkpoint.")
- checkpoint = pre_checkpoint["model"]
- else:
- checkpoint = pre_checkpoint["model"]
-
- del pre_checkpoint
-
- with init_empty_weights():
- transformer_model = transformer_model_from_original_config(
- diffusion_config, transformer_config, content_embedding_config
- )
-
- diffusers_transformer_checkpoint = transformer_original_checkpoint_to_diffusers_checkpoint(
- transformer_model, checkpoint
- )
-
- # classifier free sampling embeddings interlude
-
- # The learned embeddings are stored on the transformer in the original VQ-diffusion. We store them on a separate
- # model, so we pull them off the checkpoint before the checkpoint is deleted.
-
- learnable_classifier_free_sampling_embeddings = diffusion_config.params.learnable_cf
-
- if learnable_classifier_free_sampling_embeddings:
- learned_classifier_free_sampling_embeddings_embeddings = checkpoint["transformer.empty_text_embed"]
- else:
- learned_classifier_free_sampling_embeddings_embeddings = None
-
- # done classifier free sampling embeddings interlude
-
- with tempfile.NamedTemporaryFile() as diffusers_transformer_checkpoint_file:
- torch.save(diffusers_transformer_checkpoint, diffusers_transformer_checkpoint_file.name)
- del diffusers_transformer_checkpoint
- del checkpoint
- load_checkpoint_and_dispatch(transformer_model, diffusers_transformer_checkpoint_file.name, device_map="auto")
-
- print("done loading transformer")
-
- # done transformer_model
-
- # text encoder
-
- print("loading CLIP text encoder")
-
- clip_name = "openai/clip-vit-base-patch32"
-
- # The original VQ-Diffusion specifies the pad value by the int used in the
- # returned tokens. Each model uses `0` as the pad value. The transformers clip api
- # specifies the pad value via the token before it has been tokenized. The `!` pad
- # token is the same as padding with the `0` pad value.
- pad_token = "!"
-
- tokenizer_model = CLIPTokenizer.from_pretrained(clip_name, pad_token=pad_token, device_map="auto")
-
- assert tokenizer_model.convert_tokens_to_ids(pad_token) == 0
-
- text_encoder_model = CLIPTextModel.from_pretrained(
- clip_name,
- # `CLIPTextModel` does not support device_map="auto"
- # device_map="auto"
- )
-
- print("done loading CLIP text encoder")
-
- # done text encoder
-
- # scheduler
-
- scheduler_model = VQDiffusionScheduler(
- # the scheduler has the same number of embeddings as the transformer
- num_vec_classes=transformer_model.num_vector_embeds
- )
-
- # done scheduler
-
- # learned classifier free sampling embeddings
-
- with init_empty_weights():
- learned_classifier_free_sampling_embeddings_model = LearnedClassifierFreeSamplingEmbeddings(
- learnable_classifier_free_sampling_embeddings,
- hidden_size=text_encoder_model.config.hidden_size,
- length=tokenizer_model.model_max_length,
- )
-
- learned_classifier_free_sampling_checkpoint = {
- "embeddings": learned_classifier_free_sampling_embeddings_embeddings.float()
- }
-
- with tempfile.NamedTemporaryFile() as learned_classifier_free_sampling_checkpoint_file:
- torch.save(learned_classifier_free_sampling_checkpoint, learned_classifier_free_sampling_checkpoint_file.name)
- del learned_classifier_free_sampling_checkpoint
- del learned_classifier_free_sampling_embeddings_embeddings
- load_checkpoint_and_dispatch(
- learned_classifier_free_sampling_embeddings_model,
- learned_classifier_free_sampling_checkpoint_file.name,
- device_map="auto",
- )
-
- # done learned classifier free sampling embeddings
-
- print(f"saving VQ diffusion model, path: {args.dump_path}")
-
- pipe = VQDiffusionPipeline(
- vqvae=vqvae_model,
- transformer=transformer_model,
- tokenizer=tokenizer_model,
- text_encoder=text_encoder_model,
- learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings_model,
- scheduler=scheduler_model,
- )
- pipe.save_pretrained(args.dump_path)
-
- print("done writing VQ diffusion model")
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md
deleted file mode 100644
index 51e5e7a5b815e6c08ea4f9fa46800b18eebf42c3..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# CornerNet
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@inproceedings{law2018cornernet,
- title={Cornernet: Detecting objects as paired keypoints},
- author={Law, Hei and Deng, Jia},
- booktitle={15th European Conference on Computer Vision, ECCV 2018},
- pages={765--781},
- year={2018},
- organization={Springer Verlag}
-}
-```
-
-## Results and models
-
-| Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-| :-------------: | :--------: |:----------------: | :------: | :------------: | :----: | :------: | :--------: |
-| HourglassNet-104 | [10 x 5](./cornernet_hourglass104_mstest_10x5_210e_coco.py) | 180/210 | 13.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720-5fefbf1c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720.log.json) |
-| HourglassNet-104 | [8 x 6](./cornernet_hourglass104_mstest_8x6_210e_coco.py) | 180/210 | 15.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618.log.json) |
-| HourglassNet-104 | [32 x 3](./cornernet_hourglass104_mstest_32x3_210e_coco.py) | 180/210 | 9.5 | 3.9 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110-1efaea91.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110.log.json) |
-
-Note:
-
-- The TTA setting is single-scale testing with `flip=True`.
-- Experiments with `images_per_gpu=6` are conducted on Tesla V100-SXM2-32GB GPUs, while experiments with `images_per_gpu=3` are conducted on GeForce GTX 1080 Ti GPUs.
-- Here are the descriptions of each experiment setting:
- - 10 x 5: 10 GPUs with 5 images per GPU. This is the same setting as reported in the original paper.
- - 8 x 6: 8 GPUs with 6 images per GPU. The total batch size is similar to that of the paper, and training requires only 1 node.
- - 32 x 3: 32 GPUs with 3 images per GPU. This is the default setting for the GTX 1080 Ti and requires 4 nodes to train.
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py b/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py
deleted file mode 100644
index fb327f981d10cf94e6a7f55f5b2b4497d3e7a9cb..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import json
-import cv2
-import numpy as np
-
-from torch.utils.data import Dataset
-
-
-class MyDataset(Dataset):
- def __init__(self):
- self.data = []
- with open('./training/fill50k/prompt.json', 'rt') as f:
- for line in f:
- self.data.append(json.loads(line))
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, idx):
- item = self.data[idx]
-
- source_filename = item['source']
- target_filename = item['target']
- prompt = item['prompt']
-
- source = cv2.imread('./training/fill50k/' + source_filename)
- target = cv2.imread('./training/fill50k/' + target_filename)
-
- # Do not forget that OpenCV reads images in BGR order.
- source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
- target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
-
- # Normalize source images to [0, 1].
- source = source.astype(np.float32) / 255.0
-
- # Normalize target images to [-1, 1].
- target = (target.astype(np.float32) / 127.5) - 1.0
-
- return dict(jpg=target, txt=prompt, hint=source)
-
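A minimal sketch of feeding the dataset above into a PyTorch `DataLoader`; it assumes `./training/fill50k/prompt.json` and the referenced images exist, and the batch size is arbitrary.

```python
from torch.utils.data import DataLoader

dataset = MyDataset()
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

batch = next(iter(loader))
# Default collation stacks the numpy arrays into tensors and gathers prompts into a list.
print(batch["jpg"].shape, batch["hint"].shape)
print(batch["txt"][:2])
```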
diff --git a/spaces/Apex-X/nono/roop/capturer.py b/spaces/Apex-X/nono/roop/capturer.py
deleted file mode 100644
index 515fc8e54a9a3709ceee4c340f33e0b907416073..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/nono/roop/capturer.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from typing import Optional
-import cv2
-
-from roop.typing import Frame
-
-
-def get_video_frame(video_path: str, frame_number: int = 0) -> Optional[Frame]:
- capture = cv2.VideoCapture(video_path)
- frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT)
- capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
- has_frame, frame = capture.read()
- capture.release()
- if has_frame:
- return frame
- return None
-
-
-def get_video_frame_total(video_path: str) -> int:
- capture = cv2.VideoCapture(video_path)
- video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
- capture.release()
- return video_frame_total
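A tiny usage sketch for the two helpers above; the video path and output name are placeholders.

```python
import cv2

video_path = "example.mp4"                       # assumed input video
total = get_video_frame_total(video_path)
frame = get_video_frame(video_path, total // 2)  # grab a frame near the middle
if frame is not None:
    cv2.imwrite("middle_frame.png", frame)
```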
diff --git a/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md b/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md
deleted file mode 100644
index bd116b5a1b99491ef33d9f9fa30a230708825278..0000000000000000000000000000000000000000
--- a/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ashish Open Chat AI 17
-emoji: 📚
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py
deleted file mode 100644
index 028dcfa0fc4b3a07307989c40389b2042ceafc03..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-"""distutils.command
-
-Package containing implementation of all the standard Distutils
-commands."""
-
-__all__ = [ # noqa: F822
- 'build',
- 'build_py',
- 'build_ext',
- 'build_clib',
- 'build_scripts',
- 'clean',
- 'install',
- 'install_lib',
- 'install_headers',
- 'install_scripts',
- 'install_data',
- 'sdist',
- 'register',
- 'bdist',
- 'bdist_dumb',
- 'bdist_rpm',
- 'check',
- 'upload',
-]
diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py
deleted file mode 100644
index e9f728f2f273be5d5fdbec6c6cc41d737176a8c0..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from .factory import (
- list_models,
- create_model,
- create_model_and_transforms,
- add_model_config,
-)
-from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics
-from .model import (
- CLAP,
- CLAPTextCfg,
- CLAPVisionCfg,
- CLAPAudioCfp,
- convert_weights_to_fp16,
- trace_model,
-)
-from .openai import load_openai_model, list_openai_models
-from .pretrained import (
- list_pretrained,
- list_pretrained_tag_models,
- list_pretrained_model_tags,
- get_pretrained_url,
- download_pretrained,
-)
-from .tokenizer import SimpleTokenizer, tokenize
-from .transform import image_transform
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py
deleted file mode 100644
index 5a69e178a5ac67f69c2eeab667b9c0740a862eee..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Modified by Xingyi Zhou
-"""
-Implement many useful :class:`Augmentation`.
-"""
-import numpy as np
-import sys
-from fvcore.transforms.transform import (
- BlendTransform,
- CropTransform,
- HFlipTransform,
- NoOpTransform,
- Transform,
- VFlipTransform,
-)
-from PIL import Image
-
-from detectron2.data.transforms.augmentation import Augmentation
-from .custom_transform import EfficientDetResizeCropTransform
-
-__all__ = [
- "EfficientDetResizeCrop",
-]
-
-
-class EfficientDetResizeCrop(Augmentation):
- """
- Randomly scale the image by a factor drawn from `scale` relative to the target `size`,
- then take a crop of the target size at a random offset (EfficientDet-style resize-and-crop).
- """
-
- def __init__(
- self, size, scale, interp=Image.BILINEAR
- ):
- """
- Args:
- """
- super().__init__()
- self.target_size = (size, size)
- self.scale = scale
- self.interp = interp
-
- def get_transform(self, img):
- # Select a random scale factor.
- scale_factor = np.random.uniform(*self.scale)
- scaled_target_height = scale_factor * self.target_size[0]
- scaled_target_width = scale_factor * self.target_size[1]
- # Recompute the accurate scale_factor using rounded scaled image size.
- width, height = img.shape[1], img.shape[0]
- img_scale_y = scaled_target_height / height
- img_scale_x = scaled_target_width / width
- img_scale = min(img_scale_y, img_scale_x)
-
- # Select non-zero random offset (x, y) if scaled image is larger than target size
- scaled_h = int(height * img_scale)
- scaled_w = int(width * img_scale)
- offset_y = scaled_h - self.target_size[0]
- offset_x = scaled_w - self.target_size[1]
- offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
- offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
- return EfficientDetResizeCropTransform(
- scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
diff --git a/spaces/AzinZ/vitscn/text/__init__.py b/spaces/AzinZ/vitscn/text/__init__.py
deleted file mode 100644
index 4ac41f9025755d8ffd74068af14c6cfc8e5a4173..0000000000000000000000000000000000000000
--- a/spaces/AzinZ/vitscn/text/__init__.py
+++ /dev/null
@@ -1,54 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def cleaned_text_to_sequence(cleaned_text):
- '''Converts a string of already-cleaned text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- cleaned_text: cleaned string to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
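A round-trip sketch using the helpers above; `basic_cleaners` is an assumed cleaner name, and the call only works if every character of the cleaned text is present in `symbols`.

```python
# Text -> symbol IDs -> text again.
ids = text_to_sequence("hello world", ["basic_cleaners"])  # cleaner name is an assumption
print(ids)
print(sequence_to_text(ids))  # should reproduce the cleaned text
```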
diff --git a/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx
deleted file mode 100644
index 459bc1c5b6cdde1641919751a5e202706970f4a9..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx
+++ /dev/null
@@ -1,20 +0,0 @@
-import { fonts } from "@/lib/fonts"
-import { cn } from "@/lib/utils"
-
-export function Maintenance() {
- return (
-
- )
-}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md b/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md
deleted file mode 100644
index 5450cc08f3954ef3fc1b00c9f652b953afee0579..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
Aparcamiento de coches multijugador APK Skachat: Una guía para descargar y jugar el juego en su PC
-
Si usted está buscando un juego de simulación de estacionamiento de coches realista y divertido, es posible que desee probar Parking Multijugador. Este juego es desarrollado por olzhass y tiene más de 100 millones de descargas en Google Play Store. Pero ¿qué pasa si desea jugar en su PC en lugar de su dispositivo móvil? En este artículo, le mostraremos cómo descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC utilizando dos emuladores populares de Android: BlueStacks y NoxPlayer. También te daremos algunos consejos sobre cómo jugar el juego en tu PC y disfrutar de sus características.
-
¿Qué es el Aparcamiento Multijugador?
-
Car Parking Multiplayer es un juego de simulación que te permite experimentar la emoción de aparcar varios coches en diferentes escenarios. Puede elegir entre más de 100 coches con interiores reales y personalizarlos con afinación, vinilos y partes del cuerpo. También puede explorar un mundo abierto con estaciones de servicio y servicios de automóviles reales, competir contra otros jugadores en carreras multijugador, intercambiar coches con otros jugadores, chatear con amigos e incluso jugar roles como oficial de policía.
Algunas de las características de Aparcamiento multijugador son:
-
-
Modo multijugador de mundo abierto con caminar gratis, chat de voz, lista de amigos y modo policía.
-
82 desafíos de estacionamiento y conducción en la vida real con diferentes vehículos, como remolques, camionetas, camiones, autos deportivos y autos clásicos.
-
Gráficos de alta calidad y efectos de sonido con física realista y sistema de daños.
-
Personalización del coche con suspensión ajustable, ángulo de rueda, ajuste del motor, turbo, caja de cambios, escape y visual auto tungs.
-
Entornos altamente detallados con edificios con interior, ciclo día-noche, efectos meteorológicos y sistema de tráfico.
-
-
Requisitos y compatibilidad
-
-
Para jugar Car Parking Multijugador en su PC, es necesario tener un equipo con Windows o Mac con al menos 4 GB de RAM y 5 GB de espacio en disco libre. También es necesario descargar e instalar un emulador de Android como BlueStacks o NoxPlayer que puede ejecutar el juego sin problemas en su PC. Explicaremos cómo hacerlo en la siguiente sección.
-
How do you download and install Car Parking Multiplayer APK Skachat on your PC?
-
Car Parking Multiplayer APK Skachat is a modified version of the original game that lets you download and install it for free and without restrictions. However, since it is not available in official app stores such as the Google Play Store or the Apple App Store, you need to use a third-party source to get it. One of the more reliable sources is APKPure.com, where you can find the latest version of Car Parking Multiplayer APK Skachat along with its file information and user reviews.
-
To download and install Car Parking Multiplayer APK Skachat on your PC using the BlueStacks or NoxPlayer emulator, follow these steps:
-
Using the BlueStacks emulator
-
-
Download and install the BlueStacks emulator from its official website.
-
Launch BlueStacks and sign in with your Google account or create a new one.
-
Open the browser app in BlueStacks and go to APKPure.com. Search for Car Parking Multiplayer APK Skachat and download it to your PC.
-
Locate the downloaded file on your PC and right-click it. Choose "Open with" and select BlueStacks as the emulator.
-
Wait for the installation process to finish and then open the game from the BlueStacks home screen.
-
-
Using the NoxPlayer emulator
-
-
Download and install the NoxPlayer emulator from its official website.
-
Launch NoxPlayer and sign in with your Google account or create a new one.
-
-
Drag and drop the downloaded file onto the NoxPlayer window and wait for the installation process to finish.
-
Open the game from the NoxPlayer home screen and enjoy.
-
-
How do you play Car Parking Multiplayer on your PC?
-
Once you have downloaded and installed Car Parking Multiplayer APK Skachat on your PC using the BlueStacks or NoxPlayer emulator, you can start playing the game on your PC. Here are some tips on how to play the game on your PC:
-
Controls and settings
-
You can use the keyboard and mouse to control the game on your PC. You can also customize the key mapping to your preferences. To do this, click the keyboard icon in the bottom-right corner of the emulator screen and choose "Game controls". You can then drag and drop keys onto the corresponding buttons on the game screen. You can also adjust the sensitivity, transparency, and size of the keys. To save the configuration, click "Save" and then "Close".
-
You can also change game settings such as graphics, sound, language, and camera by clicking the gear icon in the top-right corner of the game screen. There you can choose between low, medium, high, or ultra graphics quality, enable or disable sound effects and music, select your preferred language, switch between different camera modes, and so on. To apply the changes, click "OK".
-
-
Tips and tricks
-
Here are some tips and tricks to help you play Car Parking Multiplayer better on your PC:
-
-
Use the mini-map in the top-left corner of the game screen to navigate the open world. You can also zoom in or out using the mouse wheel.
-
Use the gas station icon on the mini-map to find the nearest gas station where you can refuel your car. You can also use the car service icon to find the nearest car service where you can repair your car or change its parts.
-
-
Use the menu icon in the bottom-left corner of the game screen to access various options such as multiplayer mode, car trading, the garage, your profile, and settings.
-
Use the parking icon in the bottom-right corner of the game screen to start a parking challenge. You can choose between different difficulty levels and locations. You can also view your progress and achievements by clicking the trophy icon next to it.
-
Use the race icon in the bottom-right corner of the game screen to start a multiplayer racing challenge. You can choose between different modes, such as drag racing, drift racing, and circuit racing. You can also view your ranking and rewards by clicking the cup icon next to it.
-
-
Conclusion
-
Car Parking Multiplayer is a fun and realistic parking simulation game that you can play on your PC using an Android emulator such as BlueStacks or NoxPlayer. You can download and install Car Parking Multiplayer APK Skachat for free from APKPure.com and enjoy features such as the open-world multiplayer mode, car customization, and high-quality graphics. We hope this article has helped you learn how to download and play Car Parking Multiplayer APK Skachat on your PC. If you have any questions or comments, let us know in the comments below.
-
Frequently asked questions
-
Here are some frequently asked questions about Car Parking Multiplayer APK Skachat:
-
Is Car Parking Multiplayer APK Skachat safe to download?
-
Yes, Car Parking Multiplayer APK Skachat is safe to download as long as you use a trusted source such as APKPure.com. However, you should always be careful when downloading and installing any APK file from unknown sources, as it may contain malware or viruses that can harm your device or data. You should also check the file information and user reviews before downloading and installing any APK file.
-
What are the advantages of playing Car Parking Multiplayer on a PC?
-
-
-
You can enjoy a bigger screen and better graphics quality on your PC.
-
You can use the keyboard and mouse to control the game more easily and precisely on your PC.
-
You can save your mobile device's battery and storage space by playing the game on your PC.
-
You can play the game without interruptions or distractions from phone calls, messages, notifications, and so on.
-
-
Can I play Car Parking Multiplayer offline?
-
No, you cannot play Car Parking Multiplayer offline, since it requires an Internet connection to access some of its features such as multiplayer mode, online chat, and car trading. However, you can still play the game in single-player mode without an Internet connection by choosing the offline option from the menu.
-
How can I update Car Parking Multiplayer APK Skachat?
-
To update Car Parking Multiplayer APK Skachat, you need to download and install the latest version of the APK file from APKPure.com or another trusted source. You can also check for updates in the game settings by clicking the gear icon and then choosing "Check for updates". If a new version is available, you can download and install it from there.
-
How can I contact the developer of Car Parking Multiplayer?
-
If you have any questions, suggestions, feedback, or problems regarding Car Parking Multiplayer, you can contact the game developer by sending an email to olzhass@gmail.com or visiting their Facebook page. You can also join their Discord server to chat with other players and get support from the moderators.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md b/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md
deleted file mode 100644
index e5b3f34a36ab5b0d00ae1770a93c0ecde995b83a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
How to Download and Install Criminal Case APK with Cheat
-
If you like playing detective games on your Android device, you may have heard of Criminal Case. It is a popular hidden-object game in which you have to investigate murder cases, find clues, interrogate suspects, and catch the killers. But what if you want to make the game more fun and easier? That is where Criminal Case APK with cheat comes in. In this article, we will show you how to download and install this modified version of the game, which gives you unlimited energy, hints, stars, and more. We will also explain what an APK file is, how to install it on your device, how to play Criminal Case with cheats, and what the pros and cons of using it are.
-
What is an APK file and how do you install it on Android?
-
An APK file is an Android package file that contains all the files and code needed to run an app on your Android device. It is similar to an EXE file on Windows or a DMG file on Mac. APK files are usually used to distribute apps that are not available on the Google Play Store, or to update apps before their official release. You can also use APK files to install modified or hacked versions of apps that offer extra features or benefits.
To install an APK file on your Android device, you need to do two things. First, you need to enable unknown sources in your device settings. This lets you install apps from sources other than the Google Play Store. To do this, go to Settings > Apps > Menu > Special access > Install unknown apps. Then select your browser app (such as Chrome) and turn on the Allow from this source option.
-
-
What is Criminal Case APK with Cheat?
-
Criminal Case APK with cheat is a modified version of Criminal Case that gives you access to unlimited resources and features that can help you solve the cases faster and more easily. Some of the features include:
-
-
Unlimited energy: You can play as many scenes as you want without running out of energy.
-
Unlimited hints: You can use hints to find objects faster and earn more points.
-
Unlimited stars: You can use stars to unlock new scenes, examine clues, interrogate suspects, and arrest killers.
-
Instant analysis: You do not have to wait for lab results or reports. You can get them instantly.
-
Skip scenes and mini-games: You can skip any scene or mini-game you do not want to play.
-
No ads: You can enjoy the game without interruptions or distractions.
-
-
With these features, you can have more fun and excitement playing Criminal Case. You can also save time and money by not having to buy energy or hints with real money.
-
How to download Criminal Case APK with Cheat
-
To download Criminal Case APK with cheat, you need to follow these steps:
1. Go to a website that offers Criminal Case APK with cheat. You can use your browser app to search for these websites, or you can use one of the following links:
2. Choose the version of Criminal Case APK with cheat that you want to download. Make sure it is compatible with your device and has the features you want.
-
-
4. Once the file has been downloaded, locate it on your device using your browser app or a file manager app. Tap the file to install it. You may need to accept some pop-ups or permissions before installing the file.
-
-
5. After the installation is complete, you can launch the game from the app drawer or the home screen. Enjoy playing Criminal Case with cheats!
-
How to play Criminal Case with Cheat
-
Playing Criminal Case with cheats is similar to playing the original version of the game, except that you have access to unlimited resources and features that can make the game easier and more fun. Here are some tips and tricks on how to play Criminal Case with cheats:
-
-
To use unlimited energy, tap the energy icon in the top-right corner of the screen. You can refill your energy as many times as you want without waiting or paying.
-
To use unlimited hints, tap the hint icon in the bottom-right corner of the screen during a scene. You can use hints as many times as you want without losing points or stars.
-
To use unlimited stars, tap the star icon in the top-left corner of the screen. You can use stars as many times as you want to unlock new scenes, examine clues, interrogate suspects, and arrest killers.
-
To use instant analysis, tap the analysis icon in the bottom-left corner of the screen during a scene. You can get instant results without waiting or paying.
-
To skip scenes and mini-games, tap the skip icon in the top-right corner of the screen during a scene or mini-game. You can skip any scene or mini-game you do not want to play without losing points or stars.
-
To remove ads, tap the settings icon in the top-right corner of the screen. Then tap the remove-ads option and confirm your choice. You can enjoy the game without interruptions or distractions.
-
-
-
Pros and cons of using Criminal Case APK with cheat
-
Using Criminal Case APK with cheat has its pros and cons. Here are some of them:
| Pros | Cons |
| --- | --- |
| You can have more fun and excitement playing Criminal Case | You may lose some of the challenge and thrill of playing Criminal Case |
| You can save time and money by not having to buy energy or hints with real money | You may run into bugs or errors that can affect your game performance |
| You can try features and options that are not available in the original version of the game | You may violate the terms and conditions of the game developer or the Google Play Store |
| You can share your achievements and progress with your friends and other players | You risk losing your game data or account if you uninstall or update the game |
You should weigh these pros and cons before deciding whether or not to use Criminal Case APK with cheat. Ultimately, it comes down to your personal preference and play style.
-
Conclusion
-
Criminal Case is a fun and addictive hidden-object game that lets you play as a detective and solve murder cases. But if you want to make the game more fun and easier, you can try using Criminal Case APK with cheat. This is a modified version of the game that gives you unlimited energy, hints, stars, and more. You can download and install this version from a reputable website and enjoy playing Criminal Case with cheats.
-
Frequently asked questions
-
Here are some frequently asked questions and answers about Criminal Case APK with cheat:
-
-
-
Is it legal to use Criminal Case APK with cheat? Using Criminal Case APK with cheat may not be legal in some countries or regions, as it may violate the terms and conditions of the game developer or the Google Play Store. You should check the laws and regulations of your location before using Criminal Case APK with cheat. You should also respect the rights and interests of the game developer and other players, and not use Criminal Case APK with cheat for any malicious or fraudulent purpose.
-
Will Criminal Case APK with cheat work on my device? Criminal Case APK with cheat should work on most Android devices that support the original version of Criminal Case. However, some devices may not be compatible with it, or may run into problems or errors when using it. You should check the compatibility and requirements of Criminal Case APK with cheat before downloading and installing it on your device. You should also keep your device's software and settings up to date to ensure optimal performance.
-
Can I play Criminal Case APK with cheat online or offline? You can play Criminal Case APK with cheat both online and offline. However, some features and functions may require an Internet connection to work properly, such as syncing your game data and account, accessing new cases and updates, or interacting with other players. You should make sure you have a stable and secure Internet connection when playing Criminal Case APK with cheat online.
-
-
-
We hope this article has helped you learn more about Criminal Case APK with cheat and how to download and install it on your Android device. If you have any questions or comments, please leave a comment below. Thank you for reading!
-
-CUB provides state-of-the-art, reusable software components for every layer
-of the CUDA programming model (a short device-wide usage sketch follows the list below):
-- [Device-wide primitives](https://nvlabs.github.com/cub/group___device_module.html)
- - Sort, prefix scan, reduction, histogram, etc.
- - Compatible with CUDA dynamic parallelism
-- [Block-wide "collective" primitives](https://nvlabs.github.com/cub/group___block_module.html)
- - I/O, sort, prefix scan, reduction, histogram, etc.
- - Compatible with arbitrary thread block sizes and types
-- [Warp-wide "collective" primitives](https://nvlabs.github.com/cub/group___warp_module.html)
- - Warp-wide prefix scan, reduction, etc.
- - Safe and architecture-specific
-- [Thread and resource utilities](https://nvlabs.github.com/cub/group___thread_module.html)
- - PTX intrinsics, device reflection, texture-caching iterators, caching memory allocators, etc.
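As a quick illustration of the device-wide layer, a sum reduction with `cub::DeviceReduce` follows CUB's usual two-phase pattern: query the required temporary storage, allocate it, then run the operation. The buffer handling and the missing error checks below are simplifications for illustration and are not taken from the original README:

```C++
#include <cub/cub.cuh>

// Sketch: sum num_items integers that already live in device memory.
void SumOnDevice(const int *d_in, int *d_out, int num_items)
{
    void  *d_temp_storage     = nullptr;
    size_t temp_storage_bytes = 0;

    // First call: d_temp_storage is null, so CUB only reports how many bytes it needs.
    cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);

    // Allocate the scratch space and run the actual reduction.
    cudaMalloc(&d_temp_storage, temp_storage_bytes);
    cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);

    cudaFree(d_temp_storage);
}
```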
-
-
-
-CUB is included in the NVIDIA HPC SDK and the CUDA Toolkit.
-
-We recommend the [CUB Project Website](http://nvlabs.github.com/cub) for further information and examples.
-
-
-
A Simple Example
-
-```C++
-#include <cub/cub.cuh>
-
-// Block-sorting CUDA kernel
-__global__ void BlockSortKernel(int *d_in, int *d_out)
-{
- using namespace cub;
-
- // Specialize BlockRadixSort, BlockLoad, and BlockStore for 128 threads
- // owning 16 integer items each
-     typedef BlockRadixSort<int, 128, 16>                     BlockRadixSort;
-     typedef BlockLoad<int, 128, 16, BLOCK_LOAD_TRANSPOSE>    BlockLoad;
-     typedef BlockStore<int, 128, 16, BLOCK_STORE_TRANSPOSE>  BlockStore;
-
- // Allocate shared memory
- __shared__ union {
- typename BlockRadixSort::TempStorage sort;
- typename BlockLoad::TempStorage load;
- typename BlockStore::TempStorage store;
- } temp_storage;
-
-     int block_offset = blockIdx.x * (128 * 16);    // OffsetT for this block's segment
-
- // Obtain a segment of 2048 consecutive keys that are blocked across threads
- int thread_keys[16];
- BlockLoad(temp_storage.load).Load(d_in + block_offset, thread_keys);
- __syncthreads();
-
- // Collectively sort the keys
- BlockRadixSort(temp_storage.sort).Sort(thread_keys);
- __syncthreads();
-
- // Store the sorted segment
- BlockStore(temp_storage.store).Store(d_out + block_offset, thread_keys);
-}
-```
-
-Each thread block uses `cub::BlockRadixSort` to collectively sort
-its own input segment. The class is specialized by the
-data type being sorted, by the number of threads per block, by the number of
-keys per thread, and implicitly by the targeted compilation architecture.
-
-The `cub::BlockLoad` and `cub::BlockStore` classes are similarly specialized.
-Furthermore, to provide coalesced accesses to device memory, these primitives are
-configured to access memory using a striped access pattern (where consecutive threads
-simultaneously access consecutive items) and then transpose the keys into
-a [blocked arrangement](index.html#sec4sec3) of elements across threads.
-
-Once specialized, these classes expose opaque `TempStorage` member types.
-The thread block uses these storage types to statically allocate the union of
-shared memory needed by the thread block. (Alternatively these storage types
-could be aliased to global memory allocations).
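For context, a host-side launch of this kernel might look like the following sketch; the number of tiles, the memory management, and the omitted error checking are illustrative assumptions rather than part of the original example:

```C++
// Sketch: sort many independent 2048-key tiles, one tile per 128-thread block.
int main()
{
    const int tile_size = 128 * 16;              // keys owned by one thread block
    const int num_tiles = 1024;                  // assumed problem size
    const int num_items = num_tiles * tile_size;

    int *d_in, *d_out;
    cudaMalloc(&d_in,  num_items * sizeof(int));
    cudaMalloc(&d_out, num_items * sizeof(int));
    // ... fill d_in with unsorted keys, e.g. via cudaMemcpy ...

    // One block per tile; each block sorts its own segment independently.
    BlockSortKernel<<<num_tiles, 128>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```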
-
-
-
-
-CUB uses the [CMake build system](https://cmake.org/) to build unit tests,
-examples, and header tests. To build CUB as a developer, the following
-recipe should be followed:
-
-```
-# Clone CUB repo from github:
-git clone https://github.com/thrust/cub.git
-cd cub
-
-# Create build directory:
-mkdir build
-cd build
-
-# Configure -- use one of the following:
-cmake .. # Command line interface.
-ccmake .. # ncurses GUI (Linux only)
-cmake-gui # Graphical UI, set source/build directories in the app
-
-# Build:
-cmake --build . -j # invokes make (or ninja, etc)
-
-# Run tests and examples:
-ctest
-```
-
-By default, the C++14 standard is targeted, but this can be changed in CMake.
-More information on configuring your CUB build and creating a pull request is
-found in [CONTRIBUTING.md](CONTRIBUTING.md).
-
-
-
Open Source License
-
-CUB is available under the "New BSD" open-source license:
-
-```
-Copyright (c) 2010-2011, Duane Merrill. All rights reserved.
-Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
- * Neither the name of the NVIDIA CORPORATION nor the
- names of its contributors may be used to endorse or promote products
- derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-```
diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h
deleted file mode 100644
index abb80d8c1048353490ab6c4ddc238af1bea76b9f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/type_traits/minimum_type.h>
-
-namespace thrust
-{
-
-namespace detail
-{
-
-template<typename T1, typename T2>
-  struct minimum_category
-    : minimum_type<T1,T2>
-{
-}; // end minimum_category
-
-} // end detail
-
-} // end thrust
-
-
diff --git a/spaces/CVPR/MonoScene/monoscene/modules.py b/spaces/CVPR/MonoScene/monoscene/modules.py
deleted file mode 100644
index 3e8bf875ccd6dffb51bb5acb25f0302fe0032d6c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/MonoScene/monoscene/modules.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import torch
-import torch.nn as nn
-from monoscene.DDR import Bottleneck3D
-
-
-class ASPP(nn.Module):
- """
- ASPP 3D
- Adapt from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7
- """
-
- def __init__(self, planes, dilations_conv_list):
- super().__init__()
-
- # ASPP Block
- self.conv_list = dilations_conv_list
- self.conv1 = nn.ModuleList(
- [
- nn.Conv3d(
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
- )
- for dil in dilations_conv_list
- ]
- )
- self.bn1 = nn.ModuleList(
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
- )
- self.conv2 = nn.ModuleList(
- [
- nn.Conv3d(
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
- )
- for dil in dilations_conv_list
- ]
- )
- self.bn2 = nn.ModuleList(
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
- )
- self.relu = nn.ReLU()
-
- def forward(self, x_in):
-
- y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in)))))
- for i in range(1, len(self.conv_list)):
- y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in)))))
- x_in = self.relu(y + x_in) # modified
-
- return x_in
-
-
-class SegmentationHead(nn.Module):
- """
- 3D Segmentation heads to retrieve semantic segmentation at each scale.
- Formed by Dim expansion, Conv3D, ASPP block, Conv3D.
- Taken from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7
- """
-
- def __init__(self, inplanes, planes, nbr_classes, dilations_conv_list):
- super().__init__()
-
- # First convolution
- self.conv0 = nn.Conv3d(inplanes, planes, kernel_size=3, padding=1, stride=1)
-
- # ASPP Block
- self.conv_list = dilations_conv_list
- self.conv1 = nn.ModuleList(
- [
- nn.Conv3d(
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
- )
- for dil in dilations_conv_list
- ]
- )
- self.bn1 = nn.ModuleList(
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
- )
- self.conv2 = nn.ModuleList(
- [
- nn.Conv3d(
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
- )
- for dil in dilations_conv_list
- ]
- )
- self.bn2 = nn.ModuleList(
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
- )
- self.relu = nn.ReLU()
-
- self.conv_classes = nn.Conv3d(
- planes, nbr_classes, kernel_size=3, padding=1, stride=1
- )
-
- def forward(self, x_in):
-
- # Convolution to go from inplanes to planes features...
- x_in = self.relu(self.conv0(x_in))
-
- y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in)))))
- for i in range(1, len(self.conv_list)):
- y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in)))))
- x_in = self.relu(y + x_in) # modified
-
- x_in = self.conv_classes(x_in)
-
- return x_in
-
-
-class ProcessKitti(nn.Module):
- def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]):
-        super(ProcessKitti, self).__init__()
- self.main = nn.Sequential(
- *[
- Bottleneck3D(
- feature,
- feature // 4,
- bn_momentum=bn_momentum,
- norm_layer=norm_layer,
- dilation=[i, i, i],
- )
- for i in dilations
- ]
- )
-
- def forward(self, x):
- return self.main(x)
-
-
-class Process(nn.Module):
- def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]):
- super(Process, self).__init__()
- self.main = nn.Sequential(
- *[
- Bottleneck3D(
- feature,
- feature // 4,
- bn_momentum=bn_momentum,
- norm_layer=norm_layer,
- dilation=[i, i, i],
- )
- for i in dilations
- ]
- )
-
- def forward(self, x):
- return self.main(x)
-
-
-class Upsample(nn.Module):
- def __init__(self, in_channels, out_channels, norm_layer, bn_momentum):
- super(Upsample, self).__init__()
- self.main = nn.Sequential(
- nn.ConvTranspose3d(
- in_channels,
- out_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- dilation=1,
- output_padding=1,
- ),
- norm_layer(out_channels, momentum=bn_momentum),
- nn.ReLU(),
- )
-
- def forward(self, x):
- return self.main(x)
-
-
-class Downsample(nn.Module):
- def __init__(self, feature, norm_layer, bn_momentum, expansion=8):
- super(Downsample, self).__init__()
- self.main = Bottleneck3D(
- feature,
- feature // 4,
- bn_momentum=bn_momentum,
- expansion=expansion,
- stride=2,
- downsample=nn.Sequential(
- nn.AvgPool3d(kernel_size=2, stride=2),
- nn.Conv3d(
- feature,
- int(feature * expansion / 4),
- kernel_size=1,
- stride=1,
- bias=False,
- ),
- norm_layer(int(feature * expansion / 4), momentum=bn_momentum),
- ),
- norm_layer=norm_layer,
- )
-
- def forward(self, x):
- return self.main(x)
diff --git a/spaces/CVPR/drawings-to-human/main.py b/spaces/CVPR/drawings-to-human/main.py
deleted file mode 100644
index 4383dff4c849fe1564a48f33b271ea8771ff27b7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/drawings-to-human/main.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import subprocess
-
-subprocess.run(["make", "build-all"], shell=False)
\ No newline at end of file
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py
deleted file mode 100644
index 3950e1bc22dfd9024b5371ae9fdb0fe4a45ab0e1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead
-from .keypoint_head import (
- ROI_KEYPOINT_HEAD_REGISTRY,
- build_keypoint_head,
- BaseKeypointRCNNHead,
- KRCNNConvDeconvUpsampleHead,
-)
-from .mask_head import (
- ROI_MASK_HEAD_REGISTRY,
- build_mask_head,
- BaseMaskRCNNHead,
- MaskRCNNConvUpsampleHead,
-)
-from .roi_heads import (
- ROI_HEADS_REGISTRY,
- ROIHeads,
- Res5ROIHeads,
- StandardROIHeads,
- build_roi_heads,
- select_foreground_proposals,
-)
-from .clip_roi_heads import (
- CLIPRes5ROIHeads,
- CLIPSwinROIHeads,
- PretrainRes5ROIHeads,
- CLIPStandardROIHeads,
-)
-from .cascade_rcnn import CascadeROIHeads
-from .rotated_fast_rcnn import RROIHeads
-from .fast_rcnn import FastRCNNOutputLayers
-
-from . import cascade_rcnn # isort:skip
-
-__all__ = list(globals().keys())
diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md b/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md
deleted file mode 100644
index 2715416b83025d8928e6298f238b9db6690028f4..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Lovelive VITS JPZH
-emoji: 📈
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: cc-by-nc-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py b/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py
deleted file mode 100644
index b3a69dd0d0581e8c04cf7a17ecb95d3dab135e91..0000000000000000000000000000000000000000
--- a/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from duckduckgo_search import ddg
-from duckduckgo_search.utils import SESSION
-
-
-SESSION.proxies = {
- "http": f"socks5h://localhost:7890",
- "https": f"socks5h://localhost:7890"
-}
-r = ddg("马保国")
-print(r[:2])
-"""
-[{'title': '马保国 - 维基百科,自由的百科全书', 'href': 'https://zh.wikipedia.org/wiki/%E9%A9%AC%E4%BF%9D%E5%9B%BD', 'body': '马保国(1951年 — ) ,男,籍贯 山东 临沂,出生及长大于河南,中国大陆太极拳师,自称"浑元形意太极门掌门人" 。 马保国因2017年约战mma格斗家徐晓冬首次出现
-大众视野中。 2020年5月,马保国在对阵民间武术爱好者王庆民的比赛中,30秒内被连续高速击倒三次,此事件成为了持续多日的社交 ...'}, {'title': '馬保國的主页 - 抖音', 'href': 'https://www.douyin.com/user/MS4wLjABAAAAW0E1ziOvxgUh3VVv5FE6xmoo3w5WtZalfphYZKj4mCg', 'body': '6.3万. #马马国教扛打功 最近有几个人模芳我动作,很危险啊,不可以的,朋友们不要受伤了。. 5.3万. #马保国直播带货榜第一 朋友们周末愉快,本周六早上湿点,我本人在此号进行第一次带货直播,活到老,学到老,越活越年轻。. 7.0万. #马保国击破红牛罐 昨天 ...'}]
-
-
-"""
\ No newline at end of file
diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py
deleted file mode 100644
index 9158d5f6260ec74bded95377d382387430d7cd70..0000000000000000000000000000000000000000
--- a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py
+++ /dev/null
@@ -1,43 +0,0 @@
-batch_size = 1
-modelname = "groundingdino"
-backbone = "swin_T_224_1k"
-position_embedding = "sine"
-pe_temperatureH = 20
-pe_temperatureW = 20
-return_interm_indices = [1, 2, 3]
-backbone_freeze_keywords = None
-enc_layers = 6
-dec_layers = 6
-pre_norm = False
-dim_feedforward = 2048
-hidden_dim = 256
-dropout = 0.0
-nheads = 8
-num_queries = 900
-query_dim = 4
-num_patterns = 0
-num_feature_levels = 4
-enc_n_points = 4
-dec_n_points = 4
-two_stage_type = "standard"
-two_stage_bbox_embed_share = False
-two_stage_class_embed_share = False
-transformer_activation = "relu"
-dec_pred_bbox_embed_share = True
-dn_box_noise_scale = 1.0
-dn_label_noise_ratio = 0.5
-dn_label_coef = 1.0
-dn_bbox_coef = 1.0
-embed_init_tgt = True
-dn_labelbook_size = 2000
-max_text_len = 256
-text_encoder_type = "bert-base-uncased"
-use_text_enhancer = True
-use_fusion_layer = True
-use_checkpoint = True
-use_transformer_ckpt = True
-use_text_cross_attention = True
-text_dropout = 0.0
-fusion_dropout = 0.0
-fusion_droppath = 0.1
-sub_sentence_present = True
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js
deleted file mode 100644
index f0ef2bd228fab79e6a5de476bd0842e999060c06..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js
+++ /dev/null
@@ -1,122 +0,0 @@
-import plugin from '../../lib/plugins/plugin.js'
-import { createRequire } from 'module'
-
-const require = createRequire(import.meta.url)
-const { exec } = require('child_process')
-
-export class Restart extends plugin {
- constructor (e = '') {
- super({
- name: '重启',
- dsc: '#重启',
- event: 'message',
- priority: 10,
- rule: [{
- reg: '^#重启$',
- fnc: 'restart',
- permission: 'master'
- }, {
- reg: '^#(停机|关机)$',
- fnc: 'stop',
- permission: 'master'
- }]
- })
-
- if (e) this.e = e
-
- this.key = 'Yz:restart'
- }
-
- async init () {
- let restart = await redis.get(this.key)
- if (restart) {
- restart = JSON.parse(restart)
- let time = restart.time || new Date().getTime()
- time = (new Date().getTime() - time) / 1000
-
- let msg = `重启成功:耗时${time.toFixed(2)}秒`
-
- if (restart.isGroup)
- Bot.sendGroupMsg(restart.bot_id, restart.id, msg)
- else
- Bot.sendFriendMsg(restart.bot_id, restart.id, msg)
-
- redis.del(this.key)
- }
- }
-
- async restart () {
- await this.e.reply('开始执行重启,请稍等...')
- logger.mark(`${this.e.logFnc} 开始执行重启,请稍等...`)
-
- let data = JSON.stringify({
- isGroup: !!this.e.isGroup,
- id: this.e.isGroup ? this.e.group_id : this.e.user_id,
- bot_id: this.e.self_id,
- time: new Date().getTime()
- })
-
- let npm = await this.checkPnpm()
-
- try {
- await redis.set(this.key, data, { EX: 120 })
- let cm = `${npm} start`
- if (process.argv[1].includes('pm2')) {
- cm = `${npm} run restart`
- }
-
- exec(cm, { windowsHide: true }, (error, stdout, stderr) => {
- if (error) {
- redis.del(this.key)
- this.e.reply(`操作失败!\n${error.stack}`)
- logger.error(`重启失败\n${error.stack}`)
- } else if (stdout) {
- logger.mark('重启成功,运行已由前台转为后台')
- logger.mark(`查看日志请用命令:${npm} run log`)
- logger.mark(`停止后台运行命令:${npm} stop`)
- process.exit()
- }
- })
- } catch (error) {
- redis.del(this.key)
- let e = error.stack ?? error
- this.e.reply(`操作失败!\n${e}`)
- }
-
- return true
- }
-
- async checkPnpm () {
- let npm = 'npm'
- let ret = await this.execSync('pnpm -v')
- if (ret.stdout) npm = 'pnpm'
- return npm
- }
-
- async execSync (cmd) {
- return new Promise((resolve, reject) => {
- exec(cmd, { windowsHide: true }, (error, stdout, stderr) => {
- resolve({ error, stdout, stderr })
- })
- })
- }
-
- async stop () {
- if (!process.argv[1].includes('pm2')) {
- logger.mark('关机成功,已停止运行')
- await this.e.reply('关机成功,已停止运行')
- process.exit()
- }
-
- logger.mark('关机成功,已停止运行')
- await this.e.reply('关机成功,已停止运行')
-
- let npm = await this.checkPnpm()
- exec(`${npm} stop`, { windowsHide: true }, (error, stdout, stderr) => {
- if (error) {
- this.e.reply(`操作失败!\n${error.stack}`)
- logger.error(`关机失败\n${error.stack}`)
- }
- })
- }
-}
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py
deleted file mode 100644
index 933c575873e7af8e9fca21c857a2c19f99f0cbe1..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-
-def ascension(images, texts: List[str], args):
- frame = BuildImage.open(img_dir / "0.png")
- text = f"你原本应该要去地狱的,但因为你生前{texts[0]},我们就当作你已经服完刑期了"
- try:
- frame.draw_text(
- (40, 30, 482, 135),
- text,
- allow_wrap=True,
- max_fontsize=50,
- min_fontsize=20,
- )
- except ValueError:
- raise TextOverLength(texts[0])
- return frame.save_jpg()
-
-
-add_meme(
- "ascension",
- ascension,
- min_texts=1,
- max_texts=1,
- default_texts=["学的是机械"],
- keywords=["升天"],
-)
diff --git a/spaces/Cong723/gpt-academic-public/docs/README_RS.md b/spaces/Cong723/gpt-academic-public/docs/README_RS.md
deleted file mode 100644
index f8d925a27a6e5a19304db6f6d266e3bb3163172f..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/docs/README_RS.md
+++ /dev/null
@@ -1,291 +0,0 @@
-> **Note**
->
-> This self-translated file is automatically generated by the markdown translation module in this project and may not be 100% correct.
->
-
-# ChatGPT Academic Optimization
-
-**If you like this project, please give it a star. If you have come up with more useful academic shortcut keys or function plugins, feel free to open an issue or a pull request. We also have a [README in English](docs/README_EN.md) translated by this very project.**
-
-> **Note**
->
-> 1. Please note that only function plugins (buttons) marked in **red** can read files, and some of them are located in the **drop-down menu** of the plugin area. In addition, we welcome any new plugins and will handle them with the **highest priority**!
->
-> 2. The functions of each file in this project are described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the project iterates, you can also regenerate an updated function report at any time by clicking the corresponding function plugin to call GPT. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
-
-
-
-Feature | Description
---- | ---
-One-click polishing | Supports one-click polishing and one-click searching for grammatical errors in academic papers
-One-click English-Chinese switching | Switch between English and Chinese with a single click
-One-click code explanation | Correctly displays and explains program code
-[Custom keyboard shortcuts](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom keyboard shortcuts
-[Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports proxy server configuration
-Modular design | Supports custom high-order function plugins and function plugins that support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Program self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's source code
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the file tree of other Python/C/C++/Java/Lua/... projects
-Paper reading | [Function plugin] One-click reading of the full LaTeX text of a paper and generation of a summary
-Full LaTeX paper translation and polishing | [Function plugin] Translate or polish a LaTeX paper with a single button press
-Batch comment generation | [Function plugin] One-click batch generation of function comments
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after a run
-[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to easily translate the abstract and download the PDF
-[Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the paper title and abstract and translates the full text (multi-threaded)
-[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Let GPT pick interesting articles for you on any Google Scholar search page
-Formula/image/table display | Shows both the TeX form and the rendered form of formulas at the same time; supports formulas and code highlighting
-Multi-threaded function plugin support | Supports calling plugins in multiple threads; process huge amounts of text or code with one click
-Start with a dark gradio theme [more details](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add /?__dark-theme=true to the end of the browser URL to switch to the dark theme
-[Multiple LLM models supported](https://www.bilibili.com/video/BV1wT411p7yf), API2D supported | Being served by GPT-3.5, GPT-4 and [清华ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time must feel great, right?
-A huggingface alternative that does not require a proxy [Online experience](https://huggingface.co/spaces/qingxu98/gpt-academic) | Log in and copy the space from [this space URL](https://huggingface.co/spaces/qingxu98/gpt-academic)
-…… | ……
-
-
-
-
-- New interface (you can change the LAYOUT setting in config.py to switch between a "horizontal layout" and a "vertical layout")
-
-
-
-
-
-
-- All buttons are generated dynamically by reading functional.py and can easily be adapted to your needs, freeing up the clipboard.
-
-
-
-
-- Editing/proofreading
-
-
-
-
-- If the output contains formulas, they are displayed in both TeX form and rendered form at the same time, for easy copying and reading.
-
-
-
-
-- Too lazy to look at the project code? Just show it to chatgpt.
-
-
-
-
-- Mixed use of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
-Mixed use of multiple large language models in the [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm).
-
-
----
-
-## Installation - Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure the API_KEY and proxy settings
-
-In `config.py`, configure the overseas proxy and the OpenAI API KEY, as explained below
-```
-1. If you are in China, you need to set up an overseas proxy to be able to use the OpenAI API. Please read config.py carefully for instructions (1. Change USE_PROXY to True; 2. Modify the proxy settings according to the instructions).
-2. Configure the OpenAI API KEY. You need to register on the OpenAI website and obtain an API KEY. Once you have the API KEY, set it in config.py.
-3. Issues related to network problems (network timeouts, proxy not working) are collected here: https://github.com/binary-husky/chatgpt_academic/issues/1
-```
-(Note: when the program starts, it checks whether a private configuration file named `config_private.py` exists and uses the settings in it to override the settings with the same names in `config.py`. So if you understand our configuration reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information safer.)
-
-
-3. Install dependencies
-```sh
-# (Option 1) Recommended
-python -m pip install -r requirements.txt
-
-# (Option 2) If you use anaconda, the steps are similar:
-# (Step 2.1) conda create -n gptac_venv python=3.11
-# (Step 2.2) conda activate gptac_venv
-# (Step 2.3) python -m pip install -r requirements.txt
-
-# Note: use the official pip source or the pip.aliyun.com source. Other pip sources may cause problems. Temporary way to switch sources:
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-If you need support for Tsinghua ChatGLM, you need to install additional dependencies (if you are not familiar with Python, you will also need a reasonably powerful computer):
-```sh
-python -m pip install -r request_llm/requirements_chatglm.txt
-```
-
-4. Run
-```sh
-python main.py
-```
-
-5. Test the function plugins
-```
-- Test Python project analysis
-    In the main input area, enter `./crazy_functions/test_project/python/dqn`, then click "Analyze the entire Python project"
-- Test the project's self-reading of its code
-    Click "[Multi-threaded demo] Analyze this project itself (decode the source code)"
-- Test the template function plugin (you can use this function as a template for more complex functions that require GPT to answer what happened in history on this day)
-    Click "[Template function plugin demo] On this day in history"
-- More functions are available in the drop-down menu at the bottom
-```
-
-## Installation - Method 2: Using docker (Linux)
-
-
-1. ChatGPT only (recommended for most users):
-``` sh
-# Download the project
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# Configure the overseas proxy and the OpenAI API KEY
-Edit the config.py file in any text editor.
-# Install
-docker build -t gpt-academic .
-# Run
-docker run --rm -it --net=host gpt-academic
-
-# Test the function plugins
-## Test the template function plugin (asks GPT to answer what happened "on this day in history"); you can use this function as a template for implementing more complex functions.
-Click "[Template function plugin demo] On this day in history".
-## Test abstract summarization for a LaTeX project
-In the input area, enter ./crazy_functions/test_project/latex/attention, then click "Read and summarize the abstract of a LaTeX paper".
-## Test Python project analysis
-In the input area, enter ./crazy_functions/test_project/python/dqn, then click "Analyze the entire Python project".
-
-Select more function plugins from the drop-down menu at the bottom.
-```
-
-2. ChatGPT + ChatGLM (requires solid knowledge of Docker and sufficiently powerful hardware):
-
-``` sh
-# Modify the Dockerfile
-cd docs && nano Dockerfile+ChatGLM
-# How to build (Dockerfile+ChatGLM is under the docs path; first cd docs)
-docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# How to run (2): if you want to enter the container and make some adjustments before starting:
-docker run --rm -it --net=host --gpus=all gpt-academic bash
-```
-
-
-## Installation - Method 3: Other deployment options
-
-1. Deployment on a remote cloud server
-Please visit [Deploy Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-2. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deploy Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-
-## Installation - Proxy configuration
-### Method 1: The usual way
-[Proxy configuration](https://github.com/binary-husky/chatgpt_academic/issues/1)
-
-### Method 2: Beginner's guide
-[Beginner's guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
-
----
-
-## Adding a new convenience button (configuring an academic shortcut key)
-Open `core_functional.py` in any text editor, add an entry as shown below, and then restart the program. (If the button has already been added successfully and is visible, the prefix and suffix support hot changes and take effect without restarting the program.)
-For example
-```
-"Super English-Russian": {
-    # Prefix, added before your input. For example, used to describe what you want, such as translating, explaining code, polishing, etc.
-    "Prefix": "Please translate the following passage into Russian, and then use a markdown table to explain all the specialized terms that appear in the text:\n\n",
-
-    # Suffix, added after your input. For example, together with the prefix it can wrap your input in quotation marks.
-    "Suffix": "",
-},
-```
-
(.*?)<\/p>'
- agent_matches = re.findall(agent_prefix_pattern, message_clipped)
- final_message = ""
- if agent_matches:
- agent_parts = re.split(agent_prefix_pattern, message_clipped)
- for i, part in enumerate(agent_parts):
- if i % 2 == 0:
- final_message += escape_markdown(part) if need_escape else part
- else:
- final_message += f'
""")
-
- with gr.Accordion("About",open=False):
- gr.Markdown("""
-
Thesis System presented by
- • Daniel L. Espinola
- • Jhon Vincent A. Gupo
- • Ryan M. Ibay
- In partial fulfillment of the requirements for the degree
- Bachelor of Science in Computer Science Specialized in Intelligent Systems
- Laguna State Polytechnic University - Los Baños Campus .
- We would also like to thank our fellow adviser and subject specialist for their guidance in making this idea a reality.
- • Crisanto F. Gulay - Adviser
- • Gene Marck B. Catedrilla - Subject Specialist
-
- """)
- link.change(populate_metadata, inputs=[link], outputs=[img, title])
-
- # Transcription
- transcribe_button1.click(transcribe, inputs=audio, outputs=text_output1)
- transcribe_button2.click(transcribe_file, inputs=file_upload, outputs=text_output2)
- transcribe_button3.click(inference, inputs=link, outputs=text_link_output)
-
- # Gramify
- text_output1.change(gramify,inputs=text_output1,outputs=Grammar_text_output1)
- text_output2.change(gramify,inputs=text_output2,outputs=Grammar_text_output2)
- text_link_output.change(gramify, inputs=text_link_output ,outputs=Grammar_text_output3)
-
- # For Text Difference
- Grammar_text_output1.change(diff_texts,inputs=[text_output1,Grammar_text_output1],outputs=Diff_text_output1)
- Grammar_text_output2.change(diff_texts,inputs=[text_output2,Grammar_text_output2],outputs=Diff_text_output2)
- Grammar_text_output3.change(diff_texts,inputs=[text_link_output,Grammar_text_output3],outputs=Diff_text_output3)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py
deleted file mode 100644
index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# actions.py
-
-from .exceptions import ParseException
-from .util import col
-
-
-class OnlyOnce:
- """
- Wrapper for parse actions, to ensure they are only called once.
- """
-
- def __init__(self, method_call):
- from .core import _trim_arity
-
- self.callable = _trim_arity(method_call)
- self.called = False
-
- def __call__(self, s, l, t):
- if not self.called:
- results = self.callable(s, l, t)
- self.called = True
- return results
- raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset")
-
- def reset(self):
- """
- Allow the associated parse action to be called once more.
- """
-
- self.called = False
-
-
-def match_only_at_col(n):
- """
- Helper method for defining parse actions that require matching at
- a specific column in the input text.
- """
-
- def verify_col(strg, locn, toks):
- if col(locn, strg) != n:
- raise ParseException(strg, locn, "matched token not at column {}".format(n))
-
- return verify_col
-
-
-def replace_with(repl_str):
- """
- Helper method for common parse actions that simply return
- a literal value. Especially useful when used with
- :class:`transform_string` ().
-
- Example::
-
- num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
- na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
- term = na | num
-
- term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
- """
- return lambda s, l, t: [repl_str]
-
-
-def remove_quotes(s, l, t):
- """
- Helper parse action for removing quotation marks from parsed
- quoted strings.
-
- Example::
-
- # by default, quotation marks are included in parsed results
- quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"]
-
- # use remove_quotes to strip quotation marks from parsed results
- quoted_string.set_parse_action(remove_quotes)
- quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"]
- """
- return t[0][1:-1]
-
-
-def with_attribute(*args, **attr_dict):
- """
- Helper to create a validating parse action to be used with start
- tags created with :class:`make_xml_tags` or
- :class:`make_html_tags`. Use ``with_attribute`` to qualify
- a starting tag with a required attribute value, to avoid false
-    matches on common tags such as ``<TD>`` or ``<TABLE>``.
-
- Call ``with_attribute`` with a series of attribute names and
- values. Specify the list of filter attributes names and values as:
-
- - keyword arguments, as in ``(align="right")``, or
- - as an explicit dict with ``**`` operator, when an attribute
- name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}``
- - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))``
-
- For attribute names with a namespace prefix, you must use the second
- form. Attribute names are matched insensitive to upper/lower case.
-
- If just testing for ``class`` (with or without a namespace), use
- :class:`with_class`.
-
- To verify that the attribute exists, but without specifying a value,
- pass ``with_attribute.ANY_VALUE`` as the value.
-
- Example::
-
- html = '''
-            <div>
-            Some text
-            <div type="grid">1 4 0 1 0 </div>
-            <div type="graph">1,3 2,3 1,1 </div>
-            <div>this has no type</div>
-            </div>
-
-        '''
- div,div_end = make_html_tags("div")
-
- # only match div tag having a type attribute with value "grid"
- div_grid = div().set_parse_action(with_attribute(type="grid"))
- grid_expr = div_grid + SkipTo(div | div_end)("body")
- for grid_header in grid_expr.search_string(html):
- print(grid_header.body)
-
- # construct a match with any div tag having a type attribute, regardless of the value
- div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE))
- div_expr = div_any_type + SkipTo(div | div_end)("body")
- for div_header in div_expr.search_string(html):
- print(div_header.body)
-
- prints::
-
- 1 4 0 1 0
-
- 1 4 0 1 0
- 1,3 2,3 1,1
- """
- if args:
- attrs = args[:]
- else:
- attrs = attr_dict.items()
- attrs = [(k, v) for k, v in attrs]
-
- def pa(s, l, tokens):
- for attrName, attrValue in attrs:
- if attrName not in tokens:
- raise ParseException(s, l, "no matching attribute " + attrName)
- if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue:
- raise ParseException(
- s,
- l,
- "attribute {!r} has value {!r}, must be {!r}".format(
- attrName, tokens[attrName], attrValue
- ),
- )
-
- return pa
-
-
-with_attribute.ANY_VALUE = object()
-
-
-def with_class(classname, namespace=""):
- """
- Simplified version of :class:`with_attribute` when
- matching on a div class - made difficult because ``class`` is
- a reserved word in Python.
-
- Example::
-
- html = '''
-            <div>
-            Some text
-            <div class="grid">1 4 0 1 0 </div>
-            <div class="graph">1,3 2,3 1,1 </div>
-            <div>this &lt;div&gt; has no class</div>
-            </div>
-
-        '''
- div,div_end = make_html_tags("div")
- div_grid = div().set_parse_action(with_class("grid"))
-
- grid_expr = div_grid + SkipTo(div | div_end)("body")
- for grid_header in grid_expr.search_string(html):
- print(grid_header.body)
-
- div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE))
- div_expr = div_any_type + SkipTo(div | div_end)("body")
- for div_header in div_expr.search_string(html):
- print(div_header.body)
-
- prints::
-
- 1 4 0 1 0
-
- 1 4 0 1 0
- 1,3 2,3 1,1
- """
- classattr = "{}:class".format(namespace) if namespace else "class"
- return with_attribute(**{classattr: classname})
-
-
-# pre-PEP8 compatibility symbols
-replaceWith = replace_with
-removeQuotes = remove_quotes
-withAttribute = with_attribute
-withClass = with_class
-matchOnlyAtCol = match_only_at_col
diff --git a/spaces/Raspberry-ai/main/raspberry_flagging.py b/spaces/Raspberry-ai/main/raspberry_flagging.py
deleted file mode 100644
index 5d575af8245e239841d8b4b7602990891fdf10de..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/raspberry_flagging.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import csv
-import datetime
-import time
-import io
-import json
-import os
-import sys
-import gradio as gr
-import subprocess
-
-from gradio import encryptor, utils
-from gradio.flagging import FlaggingCallback, _get_dataset_features_info
-from gradio.components import IOComponent
-from typing import TYPE_CHECKING, Any, List, Optional
-
-from huggingface_hub.utils import run_subprocess
-
-
-"""
- This class is forked from https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py
-
-"""
-
-class RaspberryHuggingFaceDatasetSaver(FlaggingCallback):
- """
- A FlaggingCallback that saves flagged data to a HuggingFace dataset.
- """
-
- def __init__(
- self,
- hf_token: str,
- dataset_url: str,
- repo_id: str,
- private: bool = True,
- ):
- """
- Parameters:
- hf_token: The HuggingFace token to use to create (and write the flagged sample to) the HuggingFace dataset.
- dataset_url: The URL of the dataset repository on the Hugging Face Hub.
- repo_id: The repo id of the dataset to save the data to, e.g. "image-classifier-1". The hf_token must provide write access to this repo.
- private: Whether the dataset should be private (defaults to True).
- """
- self.hf_token = hf_token
- self.dataset_url = dataset_url
- self.dataset_name = repo_id
- self.dataset_private = private
- csv.field_size_limit(int(sys.maxsize/10)) # https://stackoverflow.com/questions/15063936/csv-error-field-larger-than-field-limit-131072
-
- def setup(self, components: List[IOComponent], flagging_dir: str):
- """
- Params:
- flagging_dir (str): local directory where the dataset is cloned,
- updated, and pushed from.
- """
- try:
- import huggingface_hub
- except (ImportError, ModuleNotFoundError):
- raise ImportError(
- "Package `huggingface_hub` not found is needed "
- "for HuggingFaceDatasetSaver. Try 'pip install huggingface_hub'."
- )
-
- # Wrap in try-catch ?
- path_to_dataset_repo = huggingface_hub.create_repo(
- repo_id=self.dataset_name,
- token=self.hf_token,
- private=self.dataset_private,
- repo_type="dataset",
- exist_ok=True,
- )
- self.path_to_dataset_repo = path_to_dataset_repo # e.g. "https://huggingface.co/datasets/abidlabs/test-audio-10"
- # self.path_to_dataset_repo = self.dataset_url
- self.components = components
- self.flagging_dir = flagging_dir
- self.dataset_dir = os.path.join(flagging_dir, self.dataset_name)
-
- print('dataset_dir: {} exists: {}'.format(self.dataset_dir, os.path.exists(self.dataset_dir)))
-
- try:
- print("running `git lfs update --force` subprocess")
-
- # Without the git lfs update call, the Repository call below fails with a "Hook already exists: pre-push" error.
- subprocess.run(['git', 'lfs', 'update', '--force'], capture_output=True, text=True)
-
- # In case git lfs update call above fails try the following line.
- # call_result = subprocess.run(['rm', '-rf', '.git/hooks/pre-push'], capture_output=True, text=True)
-
- except subprocess.CalledProcessError as e:
- output = e.output
- print("subprocess output except: ", output)
-
- self.repo = huggingface_hub.Repository(
- local_dir=self.dataset_dir,
- clone_from=self.path_to_dataset_repo,
- repo_type="dataset",
- use_auth_token=self.hf_token,
- )
- self.repo.git_pull(lfs=True)
-
- # Should filename be user-specified?
- self.log_file = os.path.join(self.dataset_dir, "data.csv")
- self.infos_file = os.path.join(self.dataset_dir, "dataset_infos.json")
-
- def _create_dated_directory_path(self):
- print("Unused method")
- # if dataset_dir_exists:
- # datetime_dir = os.makedirs(os.path.join(time.strftime("/%Y/%m/%d"), self.dataset_dir))
- # print("datetime_dir:", datetime_dir)
- # self.dataset_dir = datetime_dir
-
- def flag(
- self,
- flag_data: List[Any],
- flag_option: Optional[str] = None,
- flag_index: Optional[int] = None,
- username: Optional[str] = None,
- ) -> int:
- print("starting flag()")
- self.repo.git_pull(lfs=True)
- is_new = not os.path.exists(self.log_file)
-
-
- # The Gradio source code assumes every flag call contains the same components and flag data.
- # This is not the case for Raspberry; for example, inference calls can be made with or without input images.
- # The code below accounts for variable inputs being flagged.
-
- # self.components = [component for component in self.components if component is not None and component.value is not None]
- # flag_data = [data for data in flag_data if data]
-
- components_size = len(self.components)
- flag_data_size = len(flag_data)
- if components_size != flag_data_size:
- print('Size of components: [{}] must be the same as the size of flagged data [{}]'.format(components_size, flag_data_size))
- else:
- print('Size of components and flagged data are the same: {}'.format(components_size))
-
-
- print("log file is new: ", is_new)
- with open(self.log_file, "a", newline="", encoding="utf-8") as csvfile:
- writer = csv.writer(csvfile)
-
- # File previews for certain input and output types
- infos, file_preview_types, headers = _get_dataset_features_info(
- is_new, self.components
- )
-
- # Generate the headers and dataset_infos
- print("generating headers and dataset_infos")
- if is_new:
- writer.writerow(utils.sanitize_list_for_csv(headers))
-
- # Generate the row corresponding to the flagged sample
- csv_data = []
- for component, sample in zip(self.components, flag_data):
- print("flag data sample:", sample)
- if component.label == "Input Image" and not sample:
- # Skip flagging the input image if it's not set. Deserializing an unset input image breaks.
- continue
-
- save_dir = os.path.join(
- self.dataset_dir,
- utils.strip_invalid_filename_characters(component.label),
- )
- filepath = component.deserialize(sample, save_dir, None)
- csv_data.append(filepath)
- if isinstance(component, tuple(file_preview_types)):
- csv_data.append(
- "{}/resolve/main/{}".format(self.path_to_dataset_repo, filepath)
- )
- csv_data.append(flag_option if flag_option is not None else "")
- print("writing row")
- writer.writerow(utils.sanitize_list_for_csv(csv_data))
-
- if is_new:
- json.dump(infos, open(self.infos_file, "w"))
-
- with open(self.log_file, "r", encoding="utf-8") as csvfile:
- line_count = len([None for row in csv.reader(csvfile)]) - 1
-
- print("pushing to da hub...")
- self.repo.push_to_hub(commit_message="Flagged sample #{}".format(line_count))
- print("...pushed to da hub")
-
- return line_count
\ No newline at end of file
diff --git a/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py b/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py
deleted file mode 100644
index 06e4e881bfa50e2cd74747511a3ad2e8676e0c70..0000000000000000000000000000000000000000
--- a/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json')
- args = parser.parse_args()
-
- print('Loading', args.ann)
- data = json.load(open(args.ann, 'r'))
- catid2freq = {x['id']: x['frequency'] for x in data['categories']}
- print('ori #anns', len(data['annotations']))
- exclude = ['r']
- data['annotations'] = [x for x in data['annotations'] \
- if catid2freq[x['category_id']] not in exclude]
- print('filtered #anns', len(data['annotations']))
- out_path = args.ann[:-5] + '_norare.json'
- print('Saving to', out_path)
- json.dump(data, open(out_path, 'w'))
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py
deleted file mode 100644
index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import collections.abc
-import functools
-import itertools
-import subprocess
-import warnings
-from collections import abc
-from importlib import import_module
-from inspect import getfullargspec
-from itertools import repeat
-
-
-# From PyTorch internals
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
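-# e.g. to_2tuple(3) -> (3, 3); an iterable input such as to_2tuple((4, 5)) is returned unchanged.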
-
-
-def is_str(x):
- """Whether the input is an string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def import_modules_from_strings(imports, allow_failed_imports=False):
- """Import modules from the given list of strings.
-
- Args:
- imports (list | str | None): The given module names to be imported.
- allow_failed_imports (bool): If True, the failed imports will return
- None. Otherwise, an ImportError is raised. Default: False.
-
- Returns:
- list[module] | module | None: The imported modules.
-
- Examples:
- >>> osp, sys = import_modules_from_strings(
- ... ['os.path', 'sys'])
- >>> import os.path as osp_
- >>> import sys as sys_
- >>> assert osp == osp_
- >>> assert sys == sys_
- """
- if not imports:
- return
- single_import = False
- if isinstance(imports, str):
- single_import = True
- imports = [imports]
- if not isinstance(imports, list):
- raise TypeError(
- f'custom_imports must be a list but got type {type(imports)}')
- imported = []
- for imp in imports:
- if not isinstance(imp, str):
- raise TypeError(
- f'{imp} is of type {type(imp)} and cannot be imported.')
- try:
- imported_tmp = import_module(imp)
- except ImportError:
- if allow_failed_imports:
- warnings.warn(f'{imp} failed to import and is ignored.',
- UserWarning)
- imported_tmp = None
- else:
- raise ImportError
- imported.append(imported_tmp)
- if single_import:
- imported = imported[0]
- return imported
-
-
-def iter_cast(inputs, dst_type, return_type=None):
- """Cast elements of an iterable object into some type.
-
- Args:
- inputs (Iterable): The input object.
- dst_type (type): Destination type.
- return_type (type, optional): If specified, the output object will be
- converted to this type, otherwise an iterator.
-
- Returns:
- iterator or specified type: The converted object.
- """
- if not isinstance(inputs, abc.Iterable):
- raise TypeError('inputs must be an iterable object')
- if not isinstance(dst_type, type):
- raise TypeError('"dst_type" must be a valid type')
-
- out_iterable = map(dst_type, inputs)
-
- if return_type is None:
- return out_iterable
- else:
- return return_type(out_iterable)
-
-
-def list_cast(inputs, dst_type):
- """Cast elements of an iterable object into a list of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=list)
-
-
-def tuple_cast(inputs, dst_type):
- """Cast elements of an iterable object into a tuple of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=tuple)
-
-
-def is_seq_of(seq, expected_type, seq_type=None):
- """Check whether it is a sequence of some type.
-
- Args:
- seq (Sequence): The sequence to be checked.
- expected_type (type): Expected type of sequence items.
- seq_type (type, optional): Expected sequence type.
-
- Returns:
- bool: Whether the sequence is valid.
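-
- Example:
- >>> is_seq_of([1, 2, 3], int)
- True
- >>> is_seq_of((1, 'a'), int, seq_type=tuple)
- False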
- """
- if seq_type is None:
- exp_seq_type = abc.Sequence
- else:
- assert isinstance(seq_type, type)
- exp_seq_type = seq_type
- if not isinstance(seq, exp_seq_type):
- return False
- for item in seq:
- if not isinstance(item, expected_type):
- return False
- return True
-
-
-def is_list_of(seq, expected_type):
- """Check whether it is a list of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=list)
-
-
-def is_tuple_of(seq, expected_type):
- """Check whether it is a tuple of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=tuple)
-
-
-def slice_list(in_list, lens):
- """Slice a list into several sub lists by a list of given length.
-
- Args:
- in_list (list): The list to be sliced.
- lens (int or list): The expected length of each output sub-list.
-
- Returns:
- list: A list of sliced sub-lists.
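-
- Example:
- >>> slice_list([1, 2, 3, 4, 5, 6], [2, 4])
- [[1, 2], [3, 4, 5, 6]]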
- """
- if isinstance(lens, int):
- assert len(in_list) % lens == 0
- lens = [lens] * int(len(in_list) / lens)
- if not isinstance(lens, list):
- raise TypeError('"indices" must be an integer or a list of integers')
- elif sum(lens) != len(in_list):
- raise ValueError('sum of lens and list length does not '
- f'match: {sum(lens)} != {len(in_list)}')
- out_list = []
- idx = 0
- for i in range(len(lens)):
- out_list.append(in_list[idx:idx + lens[i]])
- idx += lens[i]
- return out_list
-
-
-def concat_list(in_list):
- """Concatenate a list of list into a single list.
-
- Args:
- in_list (list): The list of list to be merged.
-
- Returns:
- list: The concatenated flat list.
- """
- return list(itertools.chain(*in_list))
-
-
-def check_prerequisites(
- prerequisites,
- checker,
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
- 'found, please install them first.'): # yapf: disable
- """A decorator factory to check if prerequisites are satisfied.
-
- Args:
- prerequisites (str or list[str]): Prerequisites to be checked.
- checker (callable): The checker method that returns True if a
- prerequisite is met, False otherwise.
- msg_tmpl (str): The message template with two variables.
-
- Returns:
- decorator: A specific decorator.
- """
-
- def wrap(func):
-
- @functools.wraps(func)
- def wrapped_func(*args, **kwargs):
- requirements = [prerequisites] if isinstance(
- prerequisites, str) else prerequisites
- missing = []
- for item in requirements:
- if not checker(item):
- missing.append(item)
- if missing:
- print(msg_tmpl.format(', '.join(missing), func.__name__))
- raise RuntimeError('Prerequisites not met.')
- else:
- return func(*args, **kwargs)
-
- return wrapped_func
-
- return wrap
-
-
-def _check_py_package(package):
- try:
- import_module(package)
- except ImportError:
- return False
- else:
- return True
-
-
-def _check_executable(cmd):
- if subprocess.call(f'which {cmd}', shell=True) != 0:
- return False
- else:
- return True
-
-
-def requires_package(prerequisites):
- """A decorator to check if some python packages are installed.
-
- Example:
- >>> @requires_package('numpy')
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- array([0.])
- >>> @requires_package(['numpy', 'non_package'])
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- ImportError
- """
- return check_prerequisites(prerequisites, checker=_check_py_package)
-
-
-def requires_executable(prerequisites):
- """A decorator to check if some executable files are installed.
-
- Example:
- >>> @requires_executable('ffmpeg')
- >>> func(arg1, args):
- >>> print(1)
- 1
- """
- return check_prerequisites(prerequisites, checker=_check_executable)
-
-
-def deprecated_api_warning(name_dict, cls_name=None):
- """A decorator to check if some arguments are deprecate and try to replace
- deprecate src_arg_name to dst_arg_name.
-
- Args:
- name_dict(dict):
- key (str): Deprecated argument names.
- val (str): Expected argument names.
-
- Returns:
- func: New function.
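-
- Example:
- >>> @deprecated_api_warning({'old_arg': 'new_arg'})
- ... def func(new_arg=None):
- ... return new_arg
- >>> func(old_arg=1) # warns that "old_arg" is deprecated, then forwards the value
- 1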
- """
-
- def api_warning_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get name of the function
- func_name = old_func.__name__
- if cls_name is not None:
- func_name = f'{cls_name}.{func_name}'
- if args:
- arg_names = args_info.args[:len(args)]
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in arg_names:
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
- if kwargs:
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in kwargs:
-
- assert dst_arg_name not in kwargs, (
- f'The expected behavior is to replace '
- f'the deprecated key `{src_arg_name}` to '
- f'new key `{dst_arg_name}`, but got them '
- f'in the arguments at the same time, which '
- f'is confusing. `{src_arg_name}` will be '
- f'deprecated in the future, please '
- f'use `{dst_arg_name}` instead.')
-
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
-
- # apply converted arguments to the decorated method
- output = old_func(*args, **kwargs)
- return output
-
- return new_func
-
- return api_warning_wrapper
-
-
-def is_method_overridden(method, base_class, derived_class):
- """Check if a method of base class is overridden in derived class.
-
- Args:
- method (str): the method name to check.
- base_class (type): the class of the base class.
- derived_class (type | Any): the class or instance of the derived class.
- """
- assert isinstance(base_class, type), \
- "base_class doesn't accept instance, Please pass class instead."
-
- if not isinstance(derived_class, type):
- derived_class = derived_class.__class__
-
- base_method = getattr(base_class, method)
- derived_method = getattr(derived_class, method)
- return derived_method != base_method
-
-
-def has_method(obj: object, method: str) -> bool:
- """Check whether the object has a method.
-
- Args:
- method (str): The method name to check.
- obj (object): The object to check.
-
- Returns:
- bool: True if the object has the method else False.
- """
- return hasattr(obj, method) and callable(getattr(obj, method))
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py
deleted file mode 100644
index ba5bdde980dc0cd76375455c9c7ffaae4b25531e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import torch
-import torch.nn as nn
-
-# from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler
-from ..builder import DETECTORS, build_backbone, build_head, build_neck
-from .base import BaseDetector
-
-
-@DETECTORS.register_module()
-class TwoStageDetector(BaseDetector):
- """Base class for two-stage detectors.
-
- Two-stage detectors typically consist of a region proposal network and a
- task-specific regression head.
- """
-
- def __init__(self,
- backbone,
- neck=None,
- rpn_head=None,
- roi_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(TwoStageDetector, self).__init__()
- self.backbone = build_backbone(backbone)
-
- if neck is not None:
- self.neck = build_neck(neck)
-
- if rpn_head is not None:
- rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None
- rpn_head_ = rpn_head.copy()
- rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn)
- self.rpn_head = build_head(rpn_head_)
-
- if roi_head is not None:
- # update train and test cfg here for now
- # TODO: refactor assigner & sampler
- rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None
- roi_head.update(train_cfg=rcnn_train_cfg)
- roi_head.update(test_cfg=test_cfg.rcnn)
- self.roi_head = build_head(roi_head)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self.init_weights(pretrained=pretrained)
-
- @property
- def with_rpn(self):
- """bool: whether the detector has RPN"""
- return hasattr(self, 'rpn_head') and self.rpn_head is not None
-
- @property
- def with_roi_head(self):
- """bool: whether the detector has a RoI head"""
- return hasattr(self, 'roi_head') and self.roi_head is not None
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in detector.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- super(TwoStageDetector, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- if self.with_neck:
- if isinstance(self.neck, nn.Sequential):
- for m in self.neck:
- m.init_weights()
- else:
- self.neck.init_weights()
- if self.with_rpn:
- self.rpn_head.init_weights()
- if self.with_roi_head:
- self.roi_head.init_weights(pretrained)
-
- def extract_feat(self, img):
- """Directly extract features from the backbone+neck."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def forward_dummy(self, img):
- """Used for computing network flops.
-
- See `mmdetection/tools/analysis_tools/get_flops.py`
- """
- outs = ()
- # backbone
- x = self.extract_feat(img)
- # rpn
- if self.with_rpn:
- rpn_outs = self.rpn_head(x)
- outs = outs + (rpn_outs, )
- proposals = torch.randn(1000, 4).to(img.device)
- # roi_head
- roi_outs = self.roi_head.forward_dummy(x, proposals)
- outs = outs + (roi_outs, )
- return outs
-
- def forward_train(self,
- img,
- img_metas,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None,
- proposals=None,
- **kwargs):
- """
- Args:
- img (Tensor): of shape (N, C, H, W) encoding input images.
- Typically these should be mean centered and std scaled.
-
- img_metas (list[dict]): list of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmdet/datasets/pipelines/formatting.py:Collect`.
-
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-
- gt_labels (list[Tensor]): class indices corresponding to each box
-
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- gt_masks (None | Tensor) : true segmentation masks for each box
- used if the architecture supports a segmentation task.
-
- proposals : override rpn proposals with custom proposals. Use when
- `with_rpn` is False.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- x = self.extract_feat(img)
-
- losses = dict()
-
- # RPN forward and loss
- if self.with_rpn:
- proposal_cfg = self.train_cfg.get('rpn_proposal',
- self.test_cfg.rpn)
- rpn_losses, proposal_list = self.rpn_head.forward_train(
- x,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=gt_bboxes_ignore,
- proposal_cfg=proposal_cfg)
- losses.update(rpn_losses)
- else:
- proposal_list = proposals
-
- roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list,
- gt_bboxes, gt_labels,
- gt_bboxes_ignore, gt_masks,
- **kwargs)
- losses.update(roi_losses)
-
- return losses
-
- async def async_simple_test(self,
- img,
- img_meta,
- proposals=None,
- rescale=False):
- """Async test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
- x = self.extract_feat(img)
-
- if proposals is None:
- proposal_list = await self.rpn_head.async_simple_test_rpn(
- x, img_meta)
- else:
- proposal_list = proposals
-
- return await self.roi_head.async_simple_test(
- x, proposal_list, img_meta, rescale=rescale)
-
- def simple_test(self, img, img_metas, proposals=None, rescale=False):
- """Test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
-
- x = self.extract_feat(img)
-
- # get origin input shape to onnx dynamic input shape
- if torch.onnx.is_in_onnx_export():
- img_shape = torch._shape_as_tensor(img)[2:]
- img_metas[0]['img_shape_for_onnx'] = img_shape
-
- if proposals is None:
- proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
- else:
- proposal_list = proposals
-
- return self.roi_head.simple_test(
- x, proposal_list, img_metas, rescale=rescale)
-
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- x = self.extract_feats(imgs)
- proposal_list = self.rpn_head.aug_test_rpn(x, img_metas)
- return self.roi_head.aug_test(
- x, proposal_list, img_metas, rescale=rescale)
diff --git a/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py b/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py
deleted file mode 100644
index 41c6f4a9250b2d5954ce93cb7c04e7b55025cb51..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from utils.hparams import hparams
-
-
-class NoneSchedule(object):
- def __init__(self, optimizer):
- super().__init__()
- self.optimizer = optimizer
- self.constant_lr = hparams['lr']
- self.step(0)
-
- def step(self, num_updates):
- self.lr = self.constant_lr
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = self.lr
- return self.lr
-
- def get_lr(self):
- return self.optimizer.param_groups[0]['lr']
-
- def get_last_lr(self):
- return self.get_lr()
-
-
-class RSQRTSchedule(object):
- def __init__(self, optimizer):
- super().__init__()
- self.optimizer = optimizer
- self.constant_lr = hparams['lr']
- self.warmup_updates = hparams['warmup_updates']
- self.hidden_size = hparams['hidden_size']
- self.lr = hparams['lr']
- for param_group in optimizer.param_groups:
- param_group['lr'] = self.lr
- self.step(0)
-
- def step(self, num_updates):
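- # Noam-style schedule: lr warms up linearly for `warmup_updates` steps, then
- # decays as num_updates ** -0.5, scaled by hidden_size ** -0.5 and floored at 1e-7.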
- constant_lr = self.constant_lr
- warmup = min(num_updates / self.warmup_updates, 1.0)
- rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5
- rsqrt_hidden = self.hidden_size ** -0.5
- self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7)
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = self.lr
- return self.lr
-
- def get_lr(self):
- return self.optimizer.param_groups[0]['lr']
-
- def get_last_lr(self):
- return self.get_lr()
diff --git a/spaces/Ryukijano/canny_coyo1m/README.md b/spaces/Ryukijano/canny_coyo1m/README.md
deleted file mode 100644
index 671348306a897f06d5cef95dc9fa3f13c5b5f697..0000000000000000000000000000000000000000
--- a/spaces/Ryukijano/canny_coyo1m/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Canny Coyo1m
-emoji: 🌖
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: jax-diffusers-event/canny_coyo1m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py b/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py
deleted file mode 100644
index 21c9a0ce2239f47505122c7321a2ade7fa2a0960..0000000000000000000000000000000000000000
--- a/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import json
-import os
-import time
-
-import chromadb
-from llama_index import (ServiceContext, StorageContext, VectorStoreIndex, )
-from llama_index.embeddings import HuggingFaceEmbedding
-from llama_index.schema import Document
-from llama_index.vector_stores import ChromaVectorStore
-
-import config
-from logger import logger
-
-
-class MediaRetriever:
- def __init__(self, media_id, similarity_top_k=5):
- self.media_id = media_id
- self.similarity_top_k = similarity_top_k
-
- self._initialize_retriever()
-
- def _initialize_retriever(self):
- docs = self._load_documents()
-
- # Create client and a new collection
- chroma_client = chromadb.EphemeralClient()
- try:
- chroma_collection = chroma_client.create_collection(f"quickstart-{time.time()}")
- except Exception as e:
- logger.error(f"Exception encountered: {e}")
- chroma_collection = None
-
- # Define embedding function
- embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")
-
- # Set up ChromaVectorStore and load in data
- if chroma_collection is not None:
- vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
- else:
- logger.error("chroma_collection is not initialized.") # handle this case
-
- storage_context = StorageContext.from_defaults(vector_store=vector_store)
- service_context = ServiceContext.from_defaults(embed_model=embed_model)
-
- logger.info("Start indexing transcription")
- self.index = VectorStoreIndex.from_documents(docs, storage_context=storage_context,
- service_context=service_context, show_progress=True)
- logger.info("End indexing transcription")
-
- self.retreiver = self.index.as_retriever(similarity_top_k=self.similarity_top_k)
-
- def _load_documents(self):
- with open(os.path.join(config.output_path_transcription, f"{self.media_id}.json"), "r") as f:
- json_data = json.load(f)
-
- documents = []
- for segment in json_data["segments"]:
- text = segment["text"]
- start = segment["start"]
- metadata = {"start": start}
- documents.append(Document(text=text, metadata=metadata))
- return documents
-
- def search(self, query):
- response = self.retreiver.retrieve(query)
- return response
diff --git a/spaces/Salesforce/BLIP/train_vqa.py b/spaces/Salesforce/BLIP/train_vqa.py
deleted file mode 100644
index 89eb7490862e517cc660f842396033c21d441a20..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/BLIP/train_vqa.py
+++ /dev/null
@@ -1,202 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import DataLoader
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-
-from models.blip_vqa import blip_vqa
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-from data.vqa_dataset import vqa_collate_fn
-from data.utils import save_result
-
-
-def train(model, data_loader, optimizer, epoch, device):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
-
- header = 'Train Epoch: [{}]'.format(epoch)
- print_freq = 50
-
- for i,(image, question, answer, weights, n) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image, weights = image.to(device,non_blocking=True), weights.to(device,non_blocking=True)
-
- loss = model(image, question, answer, train=True, n=n, weights=weights)
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(loss=loss.item())
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluation(model, data_loader, device, config):
- # test
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- header = 'Generate VQA test result:'
- print_freq = 50
-
- result = []
-
- if config['inference']=='rank':
- answer_list = data_loader.dataset.answer_list
- answer_candidates = model.tokenizer(answer_list, padding='longest', return_tensors='pt').to(device)
- answer_candidates.input_ids[:,0] = model.tokenizer.bos_token_id
-
- for n, (image, question, question_id) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image = image.to(device,non_blocking=True)
-
- if config['inference']=='generate':
- answers = model(image, question, train=False, inference='generate')
-
- for answer, ques_id in zip(answers, question_id):
- ques_id = int(ques_id.item())
- result.append({"question_id":ques_id, "answer":answer})
-
- elif config['inference']=='rank':
- answer_ids = model(image, question, answer_candidates, train=False, inference='rank', k_test=config['k_test'])
-
- for ques_id, answer_id in zip(question_id, answer_ids):
- result.append({"question_id":int(ques_id.item()), "answer":answer_list[answer_id]})
-
- return result
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating vqa datasets")
- datasets = create_dataset('vqa', config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler(datasets, [True, False], num_tasks, global_rank)
- else:
- samplers = [None, None]
-
- train_loader, test_loader = create_loader(datasets,samplers,
- batch_size=[config['batch_size_train'],config['batch_size_test']],
- num_workers=[4,4],is_trains=[True, False],
- collate_fns=[vqa_collate_fn,None])
- #### Model ####
- print("Creating model")
- model = blip_vqa(pretrained=config['pretrained'], image_size=config['image_size'],
- vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- best = 0
- best_epoch = 0
-
- print("Start training")
- start_time = time.time()
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device)
-
- else:
- break
-
- if utils.is_main_process():
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- 'epoch': epoch,
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_%02d.pth'%epoch))
-
- dist.barrier()
-
- vqa_result = evaluation(model_without_ddp, test_loader, device, config)
- result_file = save_result(vqa_result, args.result_dir, 'vqa_result')
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/vqa.yaml')
- parser.add_argument('--output_dir', default='output/VQA')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
- parser.add_argument('--distributed', default=True, type=bool)
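- # Note: argparse converts the string via bool(), so any non-empty value (even 'False') yields True.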
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- args.result_dir = os.path.join(args.output_dir, 'result')
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
- Path(args.result_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command b/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command
deleted file mode 100644
index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000
--- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Get the directory where this script is located
-script_dir=$(dirname "$0")
-
-# Change the working directory to the script's directory
-cd "$script_dir"
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
- # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
- # Pull the latest changes
- git pull
-
- # Install dependencies
- pip3 install -r requirements.txt
-
- # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/Senpaisora6/dreambooth-training/convertosd.py b/spaces/Senpaisora6/dreambooth-training/convertosd.py
deleted file mode 100644
index b242edb1de11ad551b3c7ad98f5689fef2c3321a..0000000000000000000000000000000000000000
--- a/spaces/Senpaisora6/dreambooth-training/convertosd.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.
-# *Only* converts the UNet, VAE, and Text Encoder.
-# Does not convert optimizer state or any other thing.
-# Written by jachiam
-
-import argparse
-import os.path as osp
-
-import torch
-
-
-# =================#
-# UNet Conversion #
-# =================#
-
-unet_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("time_embed.0.weight", "time_embedding.linear_1.weight"),
- ("time_embed.0.bias", "time_embedding.linear_1.bias"),
- ("time_embed.2.weight", "time_embedding.linear_2.weight"),
- ("time_embed.2.bias", "time_embedding.linear_2.bias"),
- ("input_blocks.0.0.weight", "conv_in.weight"),
- ("input_blocks.0.0.bias", "conv_in.bias"),
- ("out.0.weight", "conv_norm_out.weight"),
- ("out.0.bias", "conv_norm_out.bias"),
- ("out.2.weight", "conv_out.weight"),
- ("out.2.bias", "conv_out.bias"),
-]
-
-unet_conversion_map_resnet = [
- # (stable-diffusion, HF Diffusers)
- ("in_layers.0", "norm1"),
- ("in_layers.2", "conv1"),
- ("out_layers.0", "norm2"),
- ("out_layers.3", "conv2"),
- ("emb_layers.1", "time_emb_proj"),
- ("skip_connection", "conv_shortcut"),
-]
-
-unet_conversion_map_layer = []
-# hardcoded number of downblocks and resnets/attentions...
-# would need smarter logic for other networks.
-for i in range(4):
- # loop over downblocks/upblocks
-
- for j in range(2):
- # loop over resnets/attentions for downblocks
- hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
- sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
- unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
-
- if i < 3:
- # no attention layers in down_blocks.3
- hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
- sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
- unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
-
- for j in range(3):
- # loop over resnets/attentions for upblocks
- hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
- sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
-
- if i > 0:
- # no attention layers in up_blocks.0
- hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
- sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
- unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
-
- if i < 3:
- # no downsample in down_blocks.3
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
- sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
- unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
-
- # no upsample in up_blocks.3
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}."
- unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
-
-hf_mid_atn_prefix = "mid_block.attentions.0."
-sd_mid_atn_prefix = "middle_block.1."
-unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
-
-for j in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{j}."
- sd_mid_res_prefix = f"middle_block.{2*j}."
- unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-def convert_unet_state_dict(unet_state_dict):
- # buyer beware: this is a *brittle* function,
- # and correct output requires that all of these pieces interact in
- # the exact order in which I have arranged them.
- mapping = {k: k for k in unet_state_dict.keys()}
- for sd_name, hf_name in unet_conversion_map:
- mapping[hf_name] = sd_name
- for k, v in mapping.items():
- if "resnets" in k:
- for sd_part, hf_part in unet_conversion_map_resnet:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- for sd_part, hf_part in unet_conversion_map_layer:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()}
- return new_state_dict
-
-
-# ================#
-# VAE Conversion #
-# ================#
-
-vae_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("nin_shortcut", "conv_shortcut"),
- ("norm_out", "conv_norm_out"),
- ("mid.attn_1.", "mid_block.attentions.0."),
-]
-
-for i in range(4):
- # down_blocks have two resnets
- for j in range(2):
- hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}."
- sd_down_prefix = f"encoder.down.{i}.block.{j}."
- vae_conversion_map.append((sd_down_prefix, hf_down_prefix))
-
- if i < 3:
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0."
- sd_downsample_prefix = f"down.{i}.downsample."
- vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix))
-
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"up.{3-i}.upsample."
- vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix))
-
- # up_blocks have three resnets
- # also, up blocks in hf are numbered in reverse from sd
- for j in range(3):
- hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}."
- sd_up_prefix = f"decoder.up.{3-i}.block.{j}."
- vae_conversion_map.append((sd_up_prefix, hf_up_prefix))
-
-# this part accounts for mid blocks in both the encoder and the decoder
-for i in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{i}."
- sd_mid_res_prefix = f"mid.block_{i+1}."
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-vae_conversion_map_attn = [
- # (stable-diffusion, HF Diffusers)
- ("norm.", "group_norm."),
- ("q.", "query."),
- ("k.", "key."),
- ("v.", "value."),
- ("proj_out.", "proj_attn."),
-]
-
-
-def reshape_weight_for_sd(w):
- # convert HF linear weights to SD conv2d weights
- return w.reshape(*w.shape, 1, 1)
-
-
-def convert_vae_state_dict(vae_state_dict):
- mapping = {k: k for k in vae_state_dict.keys()}
- for k, v in mapping.items():
- for sd_part, hf_part in vae_conversion_map:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- if "attentions" in k:
- for sd_part, hf_part in vae_conversion_map_attn:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()}
- weights_to_convert = ["q", "k", "v", "proj_out"]
- print("[1;32mConverting to CKPT ...")
- for k, v in new_state_dict.items():
- for weight_name in weights_to_convert:
- if f"mid.attn_1.{weight_name}.weight" in k:
- new_state_dict[k] = reshape_weight_for_sd(v)
- return new_state_dict
-
-
-# =========================#
-# Text Encoder Conversion #
-# =========================#
-# pretty much a no-op
-
-
-def convert_text_enc_state_dict(text_enc_dict):
- return text_enc_dict
-
-
-def convert(model_path, checkpoint_path):
- unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin")
- vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin")
- text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin")
-
- # Convert the UNet model
- unet_state_dict = torch.load(unet_path, map_location='cpu')
- unet_state_dict = convert_unet_state_dict(unet_state_dict)
- unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()}
-
- # Convert the VAE model
- vae_state_dict = torch.load(vae_path, map_location='cpu')
- vae_state_dict = convert_vae_state_dict(vae_state_dict)
- vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}
-
- # Convert the text encoder model
- text_enc_dict = torch.load(text_enc_path, map_location='cpu')
- text_enc_dict = convert_text_enc_state_dict(text_enc_dict)
- text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()}
-
- # Put together new checkpoint
- state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict}
-
- state_dict = {k:v.half() for k,v in state_dict.items()}
- state_dict = {"state_dict": state_dict}
- torch.save(state_dict, checkpoint_path)
diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py
deleted file mode 100644
index e9f0318e306fa04bff0ada70486b41aaa69b07c8..0000000000000000000000000000000000000000
--- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py
+++ /dev/null
@@ -1,608 +0,0 @@
-import argparse
-import json
-import warnings
-from collections import OrderedDict
-from copy import deepcopy
-from typing import Any, Dict, List
-
-import numpy as np
-import torch
-from transformers import AutoTokenizer
-
-from groundingdino.util.slconfig import SLConfig
-
-
-def slprint(x, name="x"):
- if isinstance(x, (torch.Tensor, np.ndarray)):
- print(f"{name}.shape:", x.shape)
- elif isinstance(x, (tuple, list)):
- print("type x:", type(x))
- for i in range(min(10, len(x))):
- slprint(x[i], f"{name}[{i}]")
- elif isinstance(x, dict):
- for k, v in x.items():
- slprint(v, f"{name}[{k}]")
- else:
- print(f"{name}.type:", type(x))
-
-
-def clean_state_dict(state_dict):
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- if k[:7] == "module.":
- k = k[7:] # remove `module.`
- new_state_dict[k] = v
- return new_state_dict
-
-
-def renorm(
- img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
-) -> torch.FloatTensor:
- # img: tensor(3,H,W) or tensor(B,3,H,W)
- # return: same as img
- assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim()
- if img.dim() == 3:
- assert img.size(0) == 3, 'img.size(0) should be 3 but "%d". (%s)' % (
- img.size(0),
- str(img.size()),
- )
- img_perm = img.permute(1, 2, 0)
- mean = torch.Tensor(mean)
- std = torch.Tensor(std)
- img_res = img_perm * std + mean
- return img_res.permute(2, 0, 1)
- else: # img.dim() == 4
- assert img.size(1) == 3, 'img.size(1) should be 3 but "%d". (%s)' % (
- img.size(1),
- str(img.size()),
- )
- img_perm = img.permute(0, 2, 3, 1)
- mean = torch.Tensor(mean)
- std = torch.Tensor(std)
- img_res = img_perm * std + mean
- return img_res.permute(0, 3, 1, 2)
-
-
-class CocoClassMapper:
- def __init__(self) -> None:
- self.category_map_str = {
- "1": 1,
- "2": 2,
- "3": 3,
- "4": 4,
- "5": 5,
- "6": 6,
- "7": 7,
- "8": 8,
- "9": 9,
- "10": 10,
- "11": 11,
- "13": 12,
- "14": 13,
- "15": 14,
- "16": 15,
- "17": 16,
- "18": 17,
- "19": 18,
- "20": 19,
- "21": 20,
- "22": 21,
- "23": 22,
- "24": 23,
- "25": 24,
- "27": 25,
- "28": 26,
- "31": 27,
- "32": 28,
- "33": 29,
- "34": 30,
- "35": 31,
- "36": 32,
- "37": 33,
- "38": 34,
- "39": 35,
- "40": 36,
- "41": 37,
- "42": 38,
- "43": 39,
- "44": 40,
- "46": 41,
- "47": 42,
- "48": 43,
- "49": 44,
- "50": 45,
- "51": 46,
- "52": 47,
- "53": 48,
- "54": 49,
- "55": 50,
- "56": 51,
- "57": 52,
- "58": 53,
- "59": 54,
- "60": 55,
- "61": 56,
- "62": 57,
- "63": 58,
- "64": 59,
- "65": 60,
- "67": 61,
- "70": 62,
- "72": 63,
- "73": 64,
- "74": 65,
- "75": 66,
- "76": 67,
- "77": 68,
- "78": 69,
- "79": 70,
- "80": 71,
- "81": 72,
- "82": 73,
- "84": 74,
- "85": 75,
- "86": 76,
- "87": 77,
- "88": 78,
- "89": 79,
- "90": 80,
- }
- self.origin2compact_mapper = {int(k): v - 1 for k, v in self.category_map_str.items()}
- self.compact2origin_mapper = {int(v - 1): int(k) for k, v in self.category_map_str.items()}
-
- def origin2compact(self, idx):
- return self.origin2compact_mapper[int(idx)]
-
- def compact2origin(self, idx):
- return self.compact2origin_mapper[int(idx)]
-
-
-def to_device(item, device):
- if isinstance(item, torch.Tensor):
- return item.to(device)
- elif isinstance(item, list):
- return [to_device(i, device) for i in item]
- elif isinstance(item, dict):
- return {k: to_device(v, device) for k, v in item.items()}
- else:
- raise NotImplementedError(
- "Call Shilong if you use other containers! type: {}".format(type(item))
- )
-
-
-#
-def get_gaussian_mean(x, axis, other_axis, softmax=True):
- """
-
- Args:
- x (float): Input images(BxCxHxW)
- axis (int): The index for weighted mean
- other_axis (int): The other index
-
- Returns: weighted index for axis, BxC
-
- """
- mat2line = torch.sum(x, axis=other_axis)
- # mat2line = mat2line / mat2line.mean() * 10
- if softmax:
- u = torch.softmax(mat2line, axis=2)
- else:
- u = mat2line / (mat2line.sum(2, keepdim=True) + 1e-6)
- size = x.shape[axis]
- ind = torch.linspace(0, 1, size).to(x.device)
- batch = x.shape[0]
- channel = x.shape[1]
- index = ind.repeat([batch, channel, 1])
- mean_position = torch.sum(index * u, dim=2)
- return mean_position
-
-
-def get_expected_points_from_map(hm, softmax=True):
- """get_gaussian_map_from_points
- B,C,H,W -> B,N,2 float(0, 1) float(0, 1)
- softargmax function
-
- Args:
- hm (float): Input images(BxCxHxW)
-
- Returns:
- weighted index for axis, BxCx2. float between 0 and 1.
-
- """
- # hm = 10*hm
- B, C, H, W = hm.shape
- y_mean = get_gaussian_mean(hm, 2, 3, softmax=softmax) # B,C
- x_mean = get_gaussian_mean(hm, 3, 2, softmax=softmax) # B,C
- # return torch.cat((x_mean.unsqueeze(-1), y_mean.unsqueeze(-1)), 2)
- return torch.stack([x_mean, y_mean], dim=2)
-
-
-# Positional encoding (section 5.1)
-# borrow from nerf
-class Embedder:
- def __init__(self, **kwargs):
- self.kwargs = kwargs
- self.create_embedding_fn()
-
- def create_embedding_fn(self):
- embed_fns = []
- d = self.kwargs["input_dims"]
- out_dim = 0
- if self.kwargs["include_input"]:
- embed_fns.append(lambda x: x)
- out_dim += d
-
- max_freq = self.kwargs["max_freq_log2"]
- N_freqs = self.kwargs["num_freqs"]
-
- if self.kwargs["log_sampling"]:
- freq_bands = 2.0 ** torch.linspace(0.0, max_freq, steps=N_freqs)
- else:
- freq_bands = torch.linspace(2.0**0.0, 2.0**max_freq, steps=N_freqs)
-
- for freq in freq_bands:
- for p_fn in self.kwargs["periodic_fns"]:
- embed_fns.append(lambda x, p_fn=p_fn, freq=freq: p_fn(x * freq))
- out_dim += d
-
- self.embed_fns = embed_fns
- self.out_dim = out_dim
-
- def embed(self, inputs):
- return torch.cat([fn(inputs) for fn in self.embed_fns], -1)
-
-
-def get_embedder(multires, i=0):
- import torch.nn as nn
-
- if i == -1:
- return nn.Identity(), 3
-
- embed_kwargs = {
- "include_input": True,
- "input_dims": 3,
- "max_freq_log2": multires - 1,
- "num_freqs": multires,
- "log_sampling": True,
- "periodic_fns": [torch.sin, torch.cos],
- }
-
- embedder_obj = Embedder(**embed_kwargs)
- embed = lambda x, eo=embedder_obj: eo.embed(x)
- return embed, embedder_obj.out_dim
-
-
-class APOPMeter:
- def __init__(self) -> None:
- self.tp = 0
- self.fp = 0
- self.tn = 0
- self.fn = 0
-
- def update(self, pred, gt):
- """
- Input:
- pred, gt: Tensor()
- """
- assert pred.shape == gt.shape
- self.tp += torch.logical_and(pred == 1, gt == 1).sum().item()
- self.fp += torch.logical_and(pred == 1, gt == 0).sum().item()
- self.tn += torch.logical_and(pred == 0, gt == 0).sum().item()
- self.fn += torch.logical_and(pred == 0, gt == 1).sum().item()
-
- def update_cm(self, tp, fp, tn, fn):
- self.tp += tp
- self.fp += fp
- self.tn += tn
- self.fn += fn
-
-
-def inverse_sigmoid(x, eps=1e-5):
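- # Numerically stable logit: clamp x to [0, 1], keep x and (1 - x) >= eps, then return log(x / (1 - x)).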
- x = x.clamp(min=0, max=1)
- x1 = x.clamp(min=eps)
- x2 = (1 - x).clamp(min=eps)
- return torch.log(x1 / x2)
-
-
-def get_raw_dict(args):
- """
- Return the dict contained in args.
-
- e.g:
- >>> with open(path, 'w') as f:
- json.dump(get_raw_dict(args), f, indent=2)
- """
- if isinstance(args, argparse.Namespace):
- return vars(args)
- elif isinstance(args, dict):
- return args
- elif isinstance(args, SLConfig):
- return args._cfg_dict
- else:
- raise NotImplementedError("Unknown type {}".format(type(args)))
-
-
-def stat_tensors(tensor):
- assert tensor.dim() == 1
- tensor_sm = tensor.softmax(0)
- entropy = (tensor_sm * torch.log(tensor_sm + 1e-9)).sum()
-
- return {
- "max": tensor.max(),
- "min": tensor.min(),
- "mean": tensor.mean(),
- "var": tensor.var(),
- "std": tensor.var() ** 0.5,
- "entropy": entropy,
- }
-
-
-class NiceRepr:
- """Inherit from this class and define ``__nice__`` to "nicely" print your
- objects.
-
- Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function
- Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``.
- If the inheriting class has a ``__len__`` method, then the default
- ``__nice__`` method will return its length.
-
- Example:
- >>> class Foo(NiceRepr):
- ... def __nice__(self):
- ... return 'info'
- >>> foo = Foo()
- >>> assert str(foo) == '<Foo(info)>'
- >>> assert repr(foo).startswith('<Foo(info) at ')
-
- Example:
- >>> class Bar(NiceRepr):
- ... pass
- >>> bar = Bar()
- >>> import pytest
- >>> with pytest.warns(None) as record:
- >>> assert 'object at' in str(bar)
- >>> assert 'object at' in repr(bar)
-
- Example:
- >>> class Baz(NiceRepr):
- ... def __len__(self):
- ... return 5
- >>> baz = Baz()
- >>> assert str(baz) == '<Baz(5)>'
- """
-
- def __nice__(self):
- """str: a "nice" summary string describing this module"""
- if hasattr(self, "__len__"):
- # It is a common pattern for objects to use __len__ in __nice__
- # As a convenience we define a default __nice__ for these objects
- return str(len(self))
- else:
- # In all other cases force the subclass to overload __nice__
- raise NotImplementedError(f"Define the __nice__ method for {self.__class__!r}")
-
- def __repr__(self):
- """str: the string of the module"""
- try:
- nice = self.__nice__()
- classname = self.__class__.__name__
- return f"<{classname}({nice}) at {hex(id(self))}>"
- except NotImplementedError as ex:
- warnings.warn(str(ex), category=RuntimeWarning)
- return object.__repr__(self)
-
- def __str__(self):
- """str: the string of the module"""
- try:
- classname = self.__class__.__name__
- nice = self.__nice__()
- return f"<{classname}({nice})>"
- except NotImplementedError as ex:
- warnings.warn(str(ex), category=RuntimeWarning)
- return object.__repr__(self)
-
-
-def ensure_rng(rng=None):
- """Coerces input into a random number generator.
-
- If the input is None, then a global random state is returned.
-
- If the input is a numeric value, then that is used as a seed to construct a
- random state. Otherwise the input is returned as-is.
-
- Adapted from [1]_.
-
- Args:
- rng (int | numpy.random.RandomState | None):
- if None, then defaults to the global rng. Otherwise this can be an
- integer or a RandomState class
- Returns:
- (numpy.random.RandomState) : rng -
- a numpy random number generator
-
- References:
- .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501
- """
-
- if rng is None:
- rng = np.random.mtrand._rand
- elif isinstance(rng, int):
- rng = np.random.RandomState(rng)
- else:
- rng = rng
- return rng
-
-
-def random_boxes(num=1, scale=1, rng=None):
- """Simple version of ``kwimage.Boxes.random``
-
- Returns:
- Tensor: shape (n, 4) in x1, y1, x2, y2 format.
-
- References:
- https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390
-
- Example:
- >>> num = 3
- >>> scale = 512
- >>> rng = 0
- >>> boxes = random_boxes(num, scale, rng)
- >>> print(boxes)
- tensor([[280.9925, 278.9802, 308.6148, 366.1769],
- [216.9113, 330.6978, 224.0446, 456.5878],
- [405.3632, 196.3221, 493.3953, 270.7942]])
- """
- rng = ensure_rng(rng)
-
- tlbr = rng.rand(num, 4).astype(np.float32)
-
- tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2])
- tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3])
- br_x = np.maximum(tlbr[:, 0], tlbr[:, 2])
- br_y = np.maximum(tlbr[:, 1], tlbr[:, 3])
-
- tlbr[:, 0] = tl_x * scale
- tlbr[:, 1] = tl_y * scale
- tlbr[:, 2] = br_x * scale
- tlbr[:, 3] = br_y * scale
-
- boxes = torch.from_numpy(tlbr)
- return boxes
-
-
-class ModelEma(torch.nn.Module):
- def __init__(self, model, decay=0.9997, device=None):
- super(ModelEma, self).__init__()
- # make a copy of the model for accumulating moving average of weights
- self.module = deepcopy(model)
- self.module.eval()
-
- # import ipdb; ipdb.set_trace()
-
- self.decay = decay
- self.device = device # perform ema on different device from model if set
- if self.device is not None:
- self.module.to(device=device)
-
- def _update(self, model, update_fn):
- with torch.no_grad():
- for ema_v, model_v in zip(
- self.module.state_dict().values(), model.state_dict().values()
- ):
- if self.device is not None:
- model_v = model_v.to(device=self.device)
- ema_v.copy_(update_fn(ema_v, model_v))
-
- def update(self, model):
- self._update(model, update_fn=lambda e, m: self.decay * e + (1.0 - self.decay) * m)
-
- def set(self, model):
- self._update(model, update_fn=lambda e, m: m)
-
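# A hedged sketch of the intended ModelEma call pattern (hypothetical toy training
# loop; torch is imported at the top of this module).
model = torch.nn.Linear(4, 2)
ema = ModelEma(model, decay=0.9997)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(10):
    x = torch.randn(8, 4)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema.update(model)   # ema_w <- decay * ema_w + (1 - decay) * model_w
# evaluate with ema.module instead of model to use the smoothed weights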
-
-class BestMetricSingle:
- def __init__(self, init_res=0.0, better="large") -> None:
- self.init_res = init_res
- self.best_res = init_res
- self.best_ep = -1
-
- self.better = better
- assert better in ["large", "small"]
-
- def isbetter(self, new_res, old_res):
- if self.better == "large":
- return new_res > old_res
- if self.better == "small":
- return new_res < old_res
-
- def update(self, new_res, ep):
- if self.isbetter(new_res, self.best_res):
- self.best_res = new_res
- self.best_ep = ep
- return True
- return False
-
- def __str__(self) -> str:
- return "best_res: {}\t best_ep: {}".format(self.best_res, self.best_ep)
-
- def __repr__(self) -> str:
- return self.__str__()
-
- def summary(self) -> dict:
- return {
- "best_res": self.best_res,
- "best_ep": self.best_ep,
- }
-
-
-class BestMetricHolder:
- def __init__(self, init_res=0.0, better="large", use_ema=False) -> None:
- self.best_all = BestMetricSingle(init_res, better)
- self.use_ema = use_ema
- if use_ema:
- self.best_ema = BestMetricSingle(init_res, better)
- self.best_regular = BestMetricSingle(init_res, better)
-
- def update(self, new_res, epoch, is_ema=False):
- """
- Return whether the new result is the best so far.
- """
- if not self.use_ema:
- return self.best_all.update(new_res, epoch)
- else:
- if is_ema:
- self.best_ema.update(new_res, epoch)
- return self.best_all.update(new_res, epoch)
- else:
- self.best_regular.update(new_res, epoch)
- return self.best_all.update(new_res, epoch)
-
- def summary(self):
- if not self.use_ema:
- return self.best_all.summary()
-
- res = {}
- res.update({f"all_{k}": v for k, v in self.best_all.summary().items()})
- res.update({f"regular_{k}": v for k, v in self.best_regular.summary().items()})
- res.update({f"ema_{k}": v for k, v in self.best_ema.summary().items()})
- return res
-
- def __repr__(self) -> str:
- return json.dumps(self.summary(), indent=2)
-
- def __str__(self) -> str:
- return self.__repr__()
-
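# An illustrative use of BestMetricHolder with hypothetical mAP values: it tracks the
# best regular, EMA and overall results across epochs.
holder = BestMetricHolder(better="large", use_ema=True)
for epoch, (regular_map, ema_map) in enumerate([(0.31, 0.30), (0.35, 0.36)]):
    holder.update(regular_map, epoch, is_ema=False)
    holder.update(ema_map, epoch, is_ema=True)
print(holder.summary())   # e.g. {'all_best_res': 0.36, 'all_best_ep': 1, ...}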
-
-def targets_to(targets: List[Dict[str, Any]], device):
- """Moves the target dicts to the given device."""
- excluded_keys = [
- "questionId",
- "tokens_positive",
- "strings_positive",
- "tokens",
- "dataset_name",
- "sentence_id",
- "original_img_id",
- "nb_eval",
- "task_id",
- "original_id",
- "token_span",
- "caption",
- "dataset_type",
- ]
- return [
- {k: v.to(device) if k not in excluded_keys else v for k, v in t.items()} for t in targets
- ]
-
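# A small illustrative call to targets_to with a hypothetical target dict: tensor
# values are moved to the device, while excluded keys such as "caption" pass through.
targets = [{"boxes": torch.rand(2, 4), "labels": torch.tensor([1, 3]), "caption": "a cat"}]
targets = targets_to(targets, torch.device("cpu"))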
-
-def get_phrases_from_posmap(
- posmap: torch.BoolTensor, tokenized: Dict, tokenizer: AutoTokenizer
-):
- assert isinstance(posmap, torch.Tensor), "posmap must be torch.Tensor"
- if posmap.dim() == 1:
- non_zero_idx = posmap.nonzero(as_tuple=True)[0].tolist()
- token_ids = [tokenized["input_ids"][i] for i in non_zero_idx]
- return tokenizer.decode(token_ids)
- else:
- raise NotImplementedError("posmap must be 1-dim")
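# A hedged usage sketch for get_phrases_from_posmap; it assumes the HuggingFace
# transformers tokenizer below is available (the sentence and marked token are illustrative).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenized = tokenizer("a cat and a dog")
posmap = torch.zeros(len(tokenized["input_ids"]), dtype=torch.bool)
posmap[2] = True   # mark the token(s) that belong to the predicted phrase
print(get_phrases_from_posmap(posmap, tokenized, tokenizer))   # -> "cat"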
diff --git a/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py b/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py
deleted file mode 100644
index ab586e19aa63e603f63f6be9948f314b0b80689e..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py
+++ /dev/null
@@ -1,490 +0,0 @@
-import torch
-
-import utils
-from utils.hparams import hparams
-from .diff.net import DiffNet
-from .diff.shallow_diffusion_tts import GaussianDiffusion, OfflineGaussianDiffusion
-from .diffspeech_task import DiffSpeechTask
-from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder
-from modules.fastspeech.pe import PitchExtractor
-from modules.fastspeech.fs2 import FastSpeech2
-from modules.diffsinger_midi.fs2 import FastSpeech2MIDI
-from modules.fastspeech.tts_modules import mel2ph_to_dur
-
-from usr.diff.candidate_decoder import FFT
-from utils.pitch_utils import denorm_f0
-from tasks.tts.fs2_utils import FastSpeechDataset
-from tasks.tts.fs2 import FastSpeech2Task
-
-import numpy as np
-import os
-import torch.nn.functional as F
-
-DIFF_DECODERS = {
- 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
- 'fft': lambda hp: FFT(
- hp['hidden_size'], hp['dec_layers'], hp['dec_ffn_kernel_size'], hp['num_heads']),
-}
-
-
-class DiffSingerTask(DiffSpeechTask):
- def __init__(self):
- super(DiffSingerTask, self).__init__()
- self.dataset_cls = FastSpeechDataset
- self.vocoder: BaseVocoder = get_vocoder_cls(hparams)()
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- self.pe = PitchExtractor().cuda()
- utils.load_ckpt(self.pe, hparams['pe_ckpt'], 'model', strict=True)
- self.pe.eval()
-
- def build_tts_model(self):
- # import torch
- # from tqdm import tqdm
- # v_min = torch.ones([80]) * 100
- # v_max = torch.ones([80]) * -100
- # for i, ds in enumerate(tqdm(self.dataset_cls('train'))):
- # v_max = torch.max(torch.max(ds['mel'].reshape(-1, 80), 0)[0], v_max)
- # v_min = torch.min(torch.min(ds['mel'].reshape(-1, 80), 0)[0], v_min)
- # if i % 100 == 0:
- # print(i, v_min, v_max)
- # print('final', v_min, v_max)
- mel_bins = hparams['audio_num_mel_bins']
- self.model = GaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
- if hparams['fs2_ckpt'] != '':
- utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
- # self.model.fs2.decoder = None
- for k, v in self.model.fs2.named_parameters():
- v.requires_grad = False
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- txt_tokens = sample['txt_tokens'] # [B, T_t]
-
- target = sample['mels'] # [B, T_s, 80]
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- model_out = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True)
-
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- pred_f0 = model_out.get('f0_denorm')
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
- self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'], name=f'fs2mel_{batch_idx}')
- return outputs
-
-
-class ShallowDiffusionOfflineDataset(FastSpeechDataset):
- def __getitem__(self, index):
- sample = super(ShallowDiffusionOfflineDataset, self).__getitem__(index)
- item = self._get_item(index)
-
- if self.prefix != 'train' and hparams['fs2_ckpt'] != '':
- fs2_ckpt = os.path.dirname(hparams['fs2_ckpt'])
- item_name = item['item_name']
- fs2_mel = torch.Tensor(np.load(f'{fs2_ckpt}/P_mels_npy/{item_name}.npy')) # ~M generated by FFT-singer.
- sample['fs2_mel'] = fs2_mel
- return sample
-
- def collater(self, samples):
- batch = super(ShallowDiffusionOfflineDataset, self).collater(samples)
- if self.prefix != 'train' and hparams['fs2_ckpt'] != '':
- batch['fs2_mels'] = utils.collate_2d([s['fs2_mel'] for s in samples], 0.0)
- return batch
-
-
-class DiffSingerOfflineTask(DiffSingerTask):
- def __init__(self):
- super(DiffSingerOfflineTask, self).__init__()
- self.dataset_cls = ShallowDiffusionOfflineDataset
-
- def build_tts_model(self):
- mel_bins = hparams['audio_num_mel_bins']
- self.model = OfflineGaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
- # if hparams['fs2_ckpt'] != '':
- # utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
- # self.model.fs2.decoder = None
-
- def run_model(self, model, sample, return_output=False, infer=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
- fs2_mel = None #sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=[target, fs2_mel], f0=f0, uv=uv, energy=energy, infer=infer)
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- # self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- # if hparams['use_pitch_embed']:
- # self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
-
- if not return_output:
- return losses
- else:
- return losses, output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- txt_tokens = sample['txt_tokens'] # [B, T_t]
-
- target = sample['mels'] # [B, T_s, 80]
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- fs2_mel = sample['fs2_mels']
- model_out = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy,
- ref_mels=[None, fs2_mel], infer=True)
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- pred_f0 = model_out.get('f0_denorm')
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
- self.plot_mel(batch_idx, sample['mels'], fs2_mel, name=f'fs2mel_{batch_idx}')
- return outputs
-
- def test_step(self, sample, batch_idx):
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- txt_tokens = sample['txt_tokens']
- energy = sample['energy']
- if hparams['profile_infer']:
- pass
- else:
- mel2ph, uv, f0 = None, None, None
- if hparams['use_gt_dur']:
- mel2ph = sample['mel2ph']
- if hparams['use_gt_f0']:
- f0 = sample['f0']
- uv = sample['uv']
- fs2_mel = sample['fs2_mels']
- outputs = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, ref_mels=[None, fs2_mel], energy=energy,
- infer=True)
- sample['outputs'] = self.model.out2mel(outputs['mel_out'])
- sample['mel2ph_pred'] = outputs['mel2ph']
-
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- sample['f0'] = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- sample['f0_pred'] = self.pe(sample['outputs'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams)
- sample['f0_pred'] = outputs.get('f0_denorm')
- return self.after_infer(sample)
-
-
-class MIDIDataset(FastSpeechDataset):
- def __getitem__(self, index):
- sample = super(MIDIDataset, self).__getitem__(index)
- item = self._get_item(index)
- sample['f0_midi'] = torch.FloatTensor(item['f0_midi'])
- sample['pitch_midi'] = torch.LongTensor(item['pitch_midi'])[:hparams['max_frames']]
-
- return sample
-
- def collater(self, samples):
- batch = super(MIDIDataset, self).collater(samples)
- batch['f0_midi'] = utils.collate_1d([s['f0_midi'] for s in samples], 0.0)
- batch['pitch_midi'] = utils.collate_1d([s['pitch_midi'] for s in samples], 0)
- # print((batch['pitch_midi'] == f0_to_coarse(batch['f0_midi'])).all())
- return batch
-
-
-class OpencpopDataset(FastSpeechDataset):
- def __getitem__(self, index):
- sample = super(OpencpopDataset, self).__getitem__(index)
- item = self._get_item(index)
- sample['pitch_midi'] = torch.LongTensor(item['pitch_midi'])[:hparams['max_frames']]
- sample['midi_dur'] = torch.FloatTensor(item['midi_dur'])[:hparams['max_frames']]
- sample['is_slur'] = torch.LongTensor(item['is_slur'])[:hparams['max_frames']]
- sample['word_boundary'] = torch.LongTensor(item['word_boundary'])[:hparams['max_frames']]
- return sample
-
- def collater(self, samples):
- batch = super(OpencpopDataset, self).collater(samples)
- batch['pitch_midi'] = utils.collate_1d([s['pitch_midi'] for s in samples], 0)
- batch['midi_dur'] = utils.collate_1d([s['midi_dur'] for s in samples], 0)
- batch['is_slur'] = utils.collate_1d([s['is_slur'] for s in samples], 0)
- batch['word_boundary'] = utils.collate_1d([s['word_boundary'] for s in samples], 0)
- return batch
-
-
-class DiffSingerMIDITask(DiffSingerTask):
- def __init__(self):
- super(DiffSingerMIDITask, self).__init__()
- # self.dataset_cls = MIDIDataset
- self.dataset_cls = OpencpopDataset
-
- def run_model(self, model, sample, return_output=False, infer=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s]
- mel2ph = sample['mel2ph']
- if hparams.get('switch_midi2f0_step') is not None and self.global_step > hparams['switch_midi2f0_step']:
- f0 = None
- uv = None
- else:
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
-
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer, pitch_midi=sample['pitch_midi'],
- midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur'))
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, sample['word_boundary'], losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- txt_tokens = sample['txt_tokens'] # [B, T_t]
-
- target = sample['mels'] # [B, T_s, 80]
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- model_out = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=None, uv=None, energy=energy, ref_mels=None, infer=True,
- pitch_midi=sample['pitch_midi'], midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur'))
-
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- pred_f0 = model_out.get('f0_denorm')
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
- self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'], name=f'fs2mel_{batch_idx}')
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- return outputs
-
- def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, wdb, losses=None):
- """
- :param dur_pred: [B, T], float, log scale
- :param mel2ph: [B, T]
- :param txt_tokens: [B, T]
- :param wdb: [B, T], word-boundary markers used to group phone durations into word durations
- :param losses: dict that collects the individual duration loss terms
- :return:
- """
- B, T = txt_tokens.shape
- nonpadding = (txt_tokens != 0).float()
- dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding
- is_sil = torch.zeros_like(txt_tokens).bool()
- for p in self.sil_ph:
- is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0])
- is_sil = is_sil.float() # [B, T_txt]
-
- # phone duration loss
- if hparams['dur_loss'] == 'mse':
- losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none')
- losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum()
- dur_pred = (dur_pred.exp() - 1).clamp(min=0)
- else:
- raise NotImplementedError
-
- # use linear scale for sent and word duration
- if hparams['lambda_word_dur'] > 0:
- idx = F.pad(wdb.cumsum(axis=1), (1, 0))[:, :-1]
- # word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_(1, idx, midi_dur) # midi_dur can be implied by add gt-ph_dur
- word_dur_p = dur_pred.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_pred)
- word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_gt)
- wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none')
- word_nonpadding = (word_dur_g > 0).float()
- wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum()
- losses['wdur'] = wdur_loss * hparams['lambda_word_dur']
- if hparams['lambda_sent_dur'] > 0:
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean')
- losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur']
-
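# A toy illustration of how the word-duration loss above aggregates phone-level
# durations into word-level durations with scatter_add (hypothetical tensors; wdb is
# assumed to mark the last phone of each word).
dur_pred = torch.tensor([[1.0, 2.0, 3.0, 4.0]])    # per-phone durations, [B=1, T=4]
wdb = torch.tensor([[0, 1, 0, 1]])                 # word-boundary flags
idx = F.pad(wdb.cumsum(dim=1), (1, 0))[:, :-1]     # word index per phone: [[0, 0, 1, 1]]
word_dur = dur_pred.new_zeros([1, int(idx.max()) + 1]).scatter_add(1, idx, dur_pred)
print(word_dur)   # tensor([[3., 7.]]) -> phone durations summed per word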
-
-class AuxDecoderMIDITask(FastSpeech2Task):
- def __init__(self):
- super().__init__()
- # self.dataset_cls = MIDIDataset
- self.dataset_cls = OpencpopDataset
-
- def build_tts_model(self):
- if hparams.get('use_midi') is not None and hparams['use_midi']:
- self.model = FastSpeech2MIDI(self.phone_encoder)
- else:
- self.model = FastSpeech2(self.phone_encoder)
-
- def run_model(self, model, sample, return_output=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
-
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=False, pitch_midi=sample['pitch_midi'],
- midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur'))
-
- losses = {}
- self.add_mel_loss(output['mel_out'], target, losses)
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, sample['word_boundary'], losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, wdb, losses=None):
- """
- :param dur_pred: [B, T], float, log scale
- :param mel2ph: [B, T]
- :param txt_tokens: [B, T]
- :param wdb: [B, T], word-boundary markers used to group phone durations into word durations
- :param losses: dict that collects the individual duration loss terms
- :return:
- """
- B, T = txt_tokens.shape
- nonpadding = (txt_tokens != 0).float()
- dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding
- is_sil = torch.zeros_like(txt_tokens).bool()
- for p in self.sil_ph:
- is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0])
- is_sil = is_sil.float() # [B, T_txt]
-
- # phone duration loss
- if hparams['dur_loss'] == 'mse':
- losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none')
- losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum()
- dur_pred = (dur_pred.exp() - 1).clamp(min=0)
- else:
- raise NotImplementedError
-
- # use linear scale for sent and word duration
- if hparams['lambda_word_dur'] > 0:
- idx = F.pad(wdb.cumsum(axis=1), (1, 0))[:, :-1]
- # word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_(1, idx, midi_dur) # midi_dur can be implied by add gt-ph_dur
- word_dur_p = dur_pred.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_pred)
- word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_gt)
- wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none')
- word_nonpadding = (word_dur_g > 0).float()
- wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum()
- losses['wdur'] = wdur_loss * hparams['lambda_word_dur']
- if hparams['lambda_sent_dur'] > 0:
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean')
- losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur']
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- mel_out = self.model.out2mel(model_out['mel_out'])
- outputs = utils.tensors_to_scalars(outputs)
- # if sample['mels'].shape[0] == 1:
- # self.add_laplace_var(mel_out, sample['mels'], outputs)
- if batch_idx < hparams['num_valid_plots']:
- self.plot_mel(batch_idx, sample['mels'], mel_out)
- self.plot_dur(batch_idx, sample, model_out)
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- return outputs
\ No newline at end of file
diff --git a/spaces/SpacesExamples/fastapi_t5/static/index.html b/spaces/SpacesExamples/fastapi_t5/static/index.html
deleted file mode 100644
index 7e2ccc20465e2ed59250df44c42c4e18c9ccaa97..0000000000000000000000000000000000000000
--- a/spaces/SpacesExamples/fastapi_t5/static/index.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-
-
-
- Fast API 🤗 Space served with Uvicorn
-
-
-
-
-
-
-
""".format(
- result["meta"]["url"], result["meta"]["url"]
- )
- if "meta" in result and result["meta"] is not None and "url" in result["meta"]
- else ""
- )
- docid_html = get_docid_html(result["docid"])
- results_html += """{}
-
-Tool design for Roots: [URL](https://huggingface.co/spaces/bigscience-data/scisearch/blob/main/roots_search_tool_specs.pdf).
-Bloom on Wikipedia: [URL](https://en.wikipedia.org/wiki/BLOOM_(language_model)).
-Bloom Video Playlist: [URL](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14).
-To access the full corpus, check [URL](https://forms.gle/qyYswbEL5kA23Wu99).
-
-Big Science - How to get started
-BigScience's BLOOM is a new 176B-parameter ML model trained on a set of datasets for natural language processing and many other tasks that have not yet been explored. Below is a set of papers, models, links, and datasets around BigScience, which promises to be the best and most recent large model of its kind, benefiting all scientific pursuits.
-
-Model: https://huggingface.co/bigscience/bloom
-Papers:
-BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100
-Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism https://arxiv.org/abs/1909.08053
-8-bit Optimizers via Block-wise Quantization https://arxiv.org/abs/2110.02861
-Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation https://arxiv.org/abs/2108.12409
-https://huggingface.co/models?other=doi:10.57967/hf/0003
-217 Other Models optimizing use of bloom via specialization: https://huggingface.co/models?other=bloom
-Datasets
-Universal Dependencies: https://paperswithcode.com/dataset/universal-dependencies
-WMT 2014: https://paperswithcode.com/dataset/wmt-2014
-The Pile: https://paperswithcode.com/dataset/the-pile
-HumanEval: https://paperswithcode.com/dataset/humaneval
-FLORES-101: https://paperswithcode.com/dataset/flores-101
-CrowS-Pairs: https://paperswithcode.com/dataset/crows-pairs
-WikiLingua: https://paperswithcode.com/dataset/wikilingua
-MTEB: https://paperswithcode.com/dataset/mteb
-xP3: https://paperswithcode.com/dataset/xp3
-DiaBLa: https://paperswithcode.com/dataset/diabla
-
-"""
-
-
-if __name__ == "__main__":
- demo = gr.Blocks(
- css=".underline-on-hover:hover { text-decoration: underline; } .flagging { font-size:12px; color:Silver; }"
- )
-
- with demo:
- with gr.Row():
- gr.Markdown(value=description)
- with gr.Row():
- query = gr.Textbox(lines=1, max_lines=1, placeholder="Type your query here...", label="Query")
- with gr.Row():
- lang = gr.Dropdown(
- choices=[
- "ar",
- "ca",
- "code",
- "en",
- "es",
- "eu",
- "fr",
- "id",
- "indic",
- "nigercongo",
- "pt",
- "vi",
- "zh",
- "detect_language",
- "all",
- ],
- value="en",
- label="Language",
- )
- with gr.Row():
- k = gr.Slider(1, 100, value=10, step=1, label="Max Results")
- with gr.Row():
- submit_btn = gr.Button("Submit")
- with gr.Row():
- results = gr.HTML(label="Results")
- flag_description = """
-
- If you choose to flag your search, we will save the query, language and the number of results you requested.
- Please consider adding any additional context in the box on the right.
"""
- with gr.Column(visible=False) as flagging_form:
- flag_txt = gr.Textbox(
- lines=1,
- placeholder="Type here...",
- label="""If you choose to flag your search, we will save the query, language and the number of results
- you requested. Please consider adding relevant additional context below:""",
- )
- flag_btn = gr.Button("Flag Results")
- flag_btn.click(flag, inputs=[query, lang, k, flag_txt], outputs=[flag_txt])
-
- def submit(query, lang, k):
- query = query.strip()
- if query is None or query == "":
- return "", ""
- return {
- results: scisearch(query, lang, k),
- flagging_form: gr.update(visible=True),
- }
-
- query.submit(fn=submit, inputs=[query, lang, k], outputs=[results, flagging_form])
- submit_btn.click(submit, inputs=[query, lang, k], outputs=[results, flagging_form])
-
- demo.launch(enable_queue=True, debug=True)
\ No newline at end of file
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js
deleted file mode 100644
index 98b2ef170e28a668f69b7197184d2424ac0ed4f0..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js
+++ /dev/null
@@ -1,404 +0,0 @@
-/**
- * @author Mugen87 / https://github.com/Mugen87
- */
-
-THREE.SSAOPass = function ( scene, camera, width, height ) {
-
- THREE.Pass.call( this );
-
- this.width = ( width !== undefined ) ? width : 512;
- this.height = ( height !== undefined ) ? height : 512;
-
- this.clear = true;
-
- this.camera = camera;
- this.scene = scene;
-
- this.kernelRadius = 8;
- this.kernelSize = 32;
- this.kernel = [];
- this.noiseTexture = null;
- this.output = 0;
-
- this.minDistance = 0.005;
- this.maxDistance = 0.1;
-
- //
-
- this.generateSampleKernel();
- this.generateRandomKernelRotations();
-
- // beauty render target with depth buffer
-
- var depthTexture = new THREE.DepthTexture();
- depthTexture.type = THREE.UnsignedShortType;
- depthTexture.minFilter = THREE.NearestFilter;
- depthTexture.maxFilter = THREE.NearestFilter;
-
- this.beautyRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, {
- minFilter: THREE.LinearFilter,
- magFilter: THREE.LinearFilter,
- format: THREE.RGBAFormat,
- depthTexture: depthTexture,
- depthBuffer: true
- } );
-
- // normal render target
-
- this.normalRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, {
- minFilter: THREE.NearestFilter,
- magFilter: THREE.NearestFilter,
- format: THREE.RGBAFormat
- } );
-
- // ssao render target
-
- this.ssaoRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, {
- minFilter: THREE.LinearFilter,
- magFilter: THREE.LinearFilter,
- format: THREE.RGBAFormat
- } );
-
- this.blurRenderTarget = this.ssaoRenderTarget.clone();
-
- // ssao material
-
- if ( THREE.SSAOShader === undefined ) {
-
- console.error( 'THREE.SSAOPass: The pass relies on THREE.SSAOShader.' );
-
- }
-
- this.ssaoMaterial = new THREE.ShaderMaterial( {
- defines: Object.assign( {}, THREE.SSAOShader.defines ),
- uniforms: THREE.UniformsUtils.clone( THREE.SSAOShader.uniforms ),
- vertexShader: THREE.SSAOShader.vertexShader,
- fragmentShader: THREE.SSAOShader.fragmentShader,
- blending: THREE.NoBlending
- } );
-
- this.ssaoMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture;
- this.ssaoMaterial.uniforms[ 'tNormal' ].value = this.normalRenderTarget.texture;
- this.ssaoMaterial.uniforms[ 'tDepth' ].value = this.beautyRenderTarget.depthTexture;
- this.ssaoMaterial.uniforms[ 'tNoise' ].value = this.noiseTexture;
- this.ssaoMaterial.uniforms[ 'kernel' ].value = this.kernel;
- this.ssaoMaterial.uniforms[ 'cameraNear' ].value = this.camera.near;
- this.ssaoMaterial.uniforms[ 'cameraFar' ].value = this.camera.far;
- this.ssaoMaterial.uniforms[ 'resolution' ].value.set( this.width, this.height );
- this.ssaoMaterial.uniforms[ 'cameraProjectionMatrix' ].value.copy( this.camera.projectionMatrix );
- this.ssaoMaterial.uniforms[ 'cameraInverseProjectionMatrix' ].value.getInverse( this.camera.projectionMatrix );
-
- // normal material
-
- this.normalMaterial = new THREE.MeshNormalMaterial();
- this.normalMaterial.blending = THREE.NoBlending;
-
- // blur material
-
- this.blurMaterial = new THREE.ShaderMaterial( {
- defines: Object.assign( {}, THREE.SSAOBlurShader.defines ),
- uniforms: THREE.UniformsUtils.clone( THREE.SSAOBlurShader.uniforms ),
- vertexShader: THREE.SSAOBlurShader.vertexShader,
- fragmentShader: THREE.SSAOBlurShader.fragmentShader
- } );
- this.blurMaterial.uniforms[ 'tDiffuse' ].value = this.ssaoRenderTarget.texture;
- this.blurMaterial.uniforms[ 'resolution' ].value.set( this.width, this.height );
-
- // material for rendering the depth
-
- this.depthRenderMaterial = new THREE.ShaderMaterial( {
- defines: Object.assign( {}, THREE.SSAODepthShader.defines ),
- uniforms: THREE.UniformsUtils.clone( THREE.SSAODepthShader.uniforms ),
- vertexShader: THREE.SSAODepthShader.vertexShader,
- fragmentShader: THREE.SSAODepthShader.fragmentShader,
- blending: THREE.NoBlending
- } );
- this.depthRenderMaterial.uniforms[ 'tDepth' ].value = this.beautyRenderTarget.depthTexture;
- this.depthRenderMaterial.uniforms[ 'cameraNear' ].value = this.camera.near;
- this.depthRenderMaterial.uniforms[ 'cameraFar' ].value = this.camera.far;
-
- // material for rendering the content of a render target
-
- this.copyMaterial = new THREE.ShaderMaterial( {
- uniforms: THREE.UniformsUtils.clone( THREE.CopyShader.uniforms ),
- vertexShader: THREE.CopyShader.vertexShader,
- fragmentShader: THREE.CopyShader.fragmentShader,
- transparent: true,
- depthTest: false,
- depthWrite: false,
- blendSrc: THREE.DstColorFactor,
- blendDst: THREE.ZeroFactor,
- blendEquation: THREE.AddEquation,
- blendSrcAlpha: THREE.DstAlphaFactor,
- blendDstAlpha: THREE.ZeroFactor,
- blendEquationAlpha: THREE.AddEquation
- } );
-
- this.fsQuad = new THREE.Pass.FullScreenQuad( null );
-
- this.originalClearColor = new THREE.Color();
-
-};
-
-THREE.SSAOPass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), {
-
- constructor: THREE.SSAOPass,
-
- dispose: function () {
-
- // dispose render targets
-
- this.beautyRenderTarget.dispose();
- this.normalRenderTarget.dispose();
- this.ssaoRenderTarget.dispose();
- this.blurRenderTarget.dispose();
-
- // dispose geometry
-
- this.quad.geometry.dispose();
-
- // dispose materials
-
- this.normalMaterial.dispose();
- this.blurMaterial.dispose();
- this.copyMaterial.dispose();
- this.depthRenderMaterial.dispose();
-
- },
-
- render: function ( renderer, writeBuffer /*, readBuffer, deltaTime, maskActive */ ) {
-
- // render beauty and depth
-
- renderer.setRenderTarget( this.beautyRenderTarget );
- renderer.clear();
- renderer.render( this.scene, this.camera );
-
- // render normals
-
- this.renderOverride( renderer, this.normalMaterial, this.normalRenderTarget, 0x7777ff, 1.0 );
-
- // render SSAO
-
- this.ssaoMaterial.uniforms[ 'kernelRadius' ].value = this.kernelRadius;
- this.ssaoMaterial.uniforms[ 'minDistance' ].value = this.minDistance;
- this.ssaoMaterial.uniforms[ 'maxDistance' ].value = this.maxDistance;
- this.renderPass( renderer, this.ssaoMaterial, this.ssaoRenderTarget );
-
- // render blur
-
- this.renderPass( renderer, this.blurMaterial, this.blurRenderTarget );
-
- // output result to screen
-
- switch ( this.output ) {
-
- case THREE.SSAOPass.OUTPUT.SSAO:
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.ssaoRenderTarget.texture;
- this.copyMaterial.blending = THREE.NoBlending;
- this.renderPass( renderer, this.copyMaterial, null );
-
- break;
-
- case THREE.SSAOPass.OUTPUT.Blur:
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.blurRenderTarget.texture;
- this.copyMaterial.blending = THREE.NoBlending;
- this.renderPass( renderer, this.copyMaterial, null );
-
- break;
-
- case THREE.SSAOPass.OUTPUT.Beauty:
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture;
- this.copyMaterial.blending = THREE.NoBlending;
- this.renderPass( renderer, this.copyMaterial, null );
-
- break;
-
- case THREE.SSAOPass.OUTPUT.Depth:
-
- this.renderPass( renderer, this.depthRenderMaterial, null );
-
- break;
-
- case THREE.SSAOPass.OUTPUT.Normal:
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.normalRenderTarget.texture;
- this.copyMaterial.blending = THREE.NoBlending;
- this.renderPass( renderer, this.copyMaterial, null );
-
- break;
-
- case THREE.SSAOPass.OUTPUT.Default:
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture;
- this.copyMaterial.blending = THREE.NoBlending;
- this.renderPass( renderer, this.copyMaterial, null );
-
- this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.blurRenderTarget.texture;
- this.copyMaterial.blending = THREE.CustomBlending;
- this.renderPass( renderer, this.copyMaterial, this.renderToScreen ? null : writeBuffer );
-
- break;
-
- default:
- console.warn( 'THREE.SSAOPass: Unknown output type.' );
-
- }
-
- },
-
- renderPass: function ( renderer, passMaterial, renderTarget, clearColor, clearAlpha ) {
-
- // save original state
- this.originalClearColor.copy( renderer.getClearColor() );
- var originalClearAlpha = renderer.getClearAlpha();
- var originalAutoClear = renderer.autoClear;
-
- renderer.setRenderTarget( renderTarget );
-
- // setup pass state
- renderer.autoClear = false;
- if ( ( clearColor !== undefined ) && ( clearColor !== null ) ) {
-
- renderer.setClearColor( clearColor );
- renderer.setClearAlpha( clearAlpha || 0.0 );
- renderer.clear();
-
- }
-
- this.fsQuad.material = passMaterial;
- this.fsQuad.render( renderer );
-
- // restore original state
- renderer.autoClear = originalAutoClear;
- renderer.setClearColor( this.originalClearColor );
- renderer.setClearAlpha( originalClearAlpha );
-
- },
-
- renderOverride: function ( renderer, overrideMaterial, renderTarget, clearColor, clearAlpha ) {
-
- this.originalClearColor.copy( renderer.getClearColor() );
- var originalClearAlpha = renderer.getClearAlpha();
- var originalAutoClear = renderer.autoClear;
-
- renderer.setRenderTarget( renderTarget );
- renderer.autoClear = false;
-
- clearColor = overrideMaterial.clearColor || clearColor;
- clearAlpha = overrideMaterial.clearAlpha || clearAlpha;
-
- if ( ( clearColor !== undefined ) && ( clearColor !== null ) ) {
-
- renderer.setClearColor( clearColor );
- renderer.setClearAlpha( clearAlpha || 0.0 );
- renderer.clear();
-
- }
-
- this.scene.overrideMaterial = overrideMaterial;
- renderer.render( this.scene, this.camera );
- this.scene.overrideMaterial = null;
-
- // restore original state
-
- renderer.autoClear = originalAutoClear;
- renderer.setClearColor( this.originalClearColor );
- renderer.setClearAlpha( originalClearAlpha );
-
- },
-
- setSize: function ( width, height ) {
-
- this.width = width;
- this.height = height;
-
- this.beautyRenderTarget.setSize( width, height );
- this.ssaoRenderTarget.setSize( width, height );
- this.normalRenderTarget.setSize( width, height );
- this.blurRenderTarget.setSize( width, height );
-
- this.ssaoMaterial.uniforms[ 'resolution' ].value.set( width, height );
- this.ssaoMaterial.uniforms[ 'cameraProjectionMatrix' ].value.copy( this.camera.projectionMatrix );
- this.ssaoMaterial.uniforms[ 'cameraInverseProjectionMatrix' ].value.getInverse( this.camera.projectionMatrix );
-
- this.blurMaterial.uniforms[ 'resolution' ].value.set( width, height );
-
- },
-
- generateSampleKernel: function () {
-
- var kernelSize = this.kernelSize;
- var kernel = this.kernel;
-
- for ( var i = 0; i < kernelSize; i ++ ) {
-
- var sample = new THREE.Vector3();
- sample.x = ( Math.random() * 2 ) - 1;
- sample.y = ( Math.random() * 2 ) - 1;
- sample.z = Math.random();
-
- sample.normalize();
-
- var scale = i / kernelSize;
- scale = THREE.Math.lerp( 0.1, 1, scale * scale );
- sample.multiplyScalar( scale );
-
- kernel.push( sample );
-
- }
-
- },
-
- generateRandomKernelRotations: function () {
-
- var width = 4, height = 4;
-
- if ( SimplexNoise === undefined ) {
-
- console.error( 'THREE.SSAOPass: The pass relies on THREE.SimplexNoise.' );
-
- }
-
- var simplex = new SimplexNoise();
-
- var size = width * height;
- var data = new Float32Array( size * 4 );
-
- for ( var i = 0; i < size; i ++ ) {
-
- var stride = i * 4;
-
- var x = ( Math.random() * 2 ) - 1;
- var y = ( Math.random() * 2 ) - 1;
- var z = 0;
-
- var noise = simplex.noise3d( x, y, z );
-
- data[ stride ] = noise;
- data[ stride + 1 ] = noise;
- data[ stride + 2 ] = noise;
- data[ stride + 3 ] = 1;
-
- }
-
- this.noiseTexture = new THREE.DataTexture( data, width, height, THREE.RGBAFormat, THREE.FloatType );
- this.noiseTexture.wrapS = THREE.RepeatWrapping;
- this.noiseTexture.wrapT = THREE.RepeatWrapping;
- this.noiseTexture.needsUpdate = true;
-
- }
-
-} );
-
-THREE.SSAOPass.OUTPUT = {
- 'Default': 0,
- 'SSAO': 1,
- 'Blur': 2,
- 'Beauty': 3,
- 'Depth': 4,
- 'Normal': 5
-};
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/__init__.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/bentrevett/emotion-prediction/app.py b/spaces/bentrevett/emotion-prediction/app.py
deleted file mode 100644
index 85a01123a594e766ccce9608df2d1f8f7e22bdf5..0000000000000000000000000000000000000000
--- a/spaces/bentrevett/emotion-prediction/app.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import streamlit as st
-import transformers
-import matplotlib.pyplot as plt
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def get_pipe():
- model_name = "joeddav/distilbert-base-uncased-go-emotions-student"
- model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
- tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
- pipe = transformers.pipeline('text-classification', model=model, tokenizer=tokenizer,
- return_all_scores=True, truncation=True)
- return pipe
-
-
-def sort_predictions(predictions):
- return sorted(predictions, key=lambda x: x['score'], reverse=True)
-
-
-st.set_page_config(page_title="Emotion Prediction")
-st.title("Emotion Prediction")
-st.write("Type text into the text box and then press 'Predict' to get the predicted emotion.")
-
-default_text = "I really love using HuggingFace Spaces!"
-
-text = st.text_area('Enter text here:', value=default_text)
-submit = st.button('Predict')
-
-with st.spinner("Loading model..."):
- pipe = get_pipe()
-
-if (submit and len(text.strip()) > 0) or len(text.strip()) > 0:
-
- prediction = pipe(text)[0]
- prediction = sort_predictions(prediction)
-
- fig, ax = plt.subplots()
- ax.bar(x=[i for i, _ in enumerate(prediction)],
- height=[p['score'] for p in prediction],
- tick_label=[p['label'] for p in prediction])
- ax.tick_params(rotation=90)
- ax.set_ylim(0, 1)
-
- st.header('Prediction:')
- st.pyplot(fig)
-
- prediction = dict([(p['label'], p['score']) for p in prediction])
- st.header('Raw values:')
- st.json(prediction)
diff --git a/spaces/bhaskartripathi/Text2Diagram/app.py b/spaces/bhaskartripathi/Text2Diagram/app.py
deleted file mode 100644
index 10dad40550502dfaa321d1e50eb54734e514f661..0000000000000000000000000000000000000000
--- a/spaces/bhaskartripathi/Text2Diagram/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import gradio as gr
-import openai
-
-
-def generate_plantuml2(api_key, text):
- openai.api_key = api_key
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {
- "role": "system",
- "content": "You are ChatGPT, a large language model trained by OpenAI. Generate PlantUML code for the following use case or code in natural language.",
- },
- {"role": "user", "content": text},
- ],
- )
- print(response)
- return response["choices"][0]["message"]["content"]
-
-def generate_plantuml(api_key, text):
- openai.api_key = api_key
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {
- "role": "system",
- "content": "You are ChatGPT, a large language model trained by OpenAI. Generate PlantUML code for an architecture that uses Azure services and icons for the following use case or code in natural language. Choose the actors appropriately as per the use case. Do not use '!define SPRITESURL https://raw.githubusercontent.com/rabelenda/cicon-plantuml-sprites/v1.1.0/sprites' as it is outdated.",
- },
- {"role": "user", "content": text},
- ],
- )
- print(response)
- return response["choices"][0]["message"]["content"]
-
-sample_text = '''
-!define AzurePuml https://raw.githubusercontent.com/plantuml-stdlib/Azure-PlantUML/master/dist
-!includeurl AzurePuml/AzureCommon.puml
-
-actor Customer
-actor Restaurant
-
-Customer -> AzureAPIManagement : Login
-AzureAPIManagement -> AzureActiveDirectory : Authenticate User
-AzureActiveDirectory -> AzureAPIManagement : Return User Info
-
-Customer -> AzureAPIManagement : Place Order
-AzureAPIManagement -> AzureFunctionApp : Process Order
-AzureFunctionApp -> AzureCosmosDB : Store Order Data
-AzureFunctionApp -> Restaurant : Send Order Details
-Restaurant -> AzureFunctionApp : Update Order Status
-AzureFunctionApp -> AzureCosmosDB : Update Order Data
-AzureFunctionApp -> Customer : Send Order Status
-
-AzureFunctionApp -> AzureNotificationHubs : Send Push Notification
-
-legend right
- Online Food Ordering App Architecture
-endlegend
-'''
-
-iface = gr.Interface(
- fn=generate_plantuml,
- inputs=[
- gr.inputs.Textbox(label="OpenAI API Key"),
- gr.inputs.Textbox(label="Enter use case or code in natural language")
- ],
- outputs=gr.outputs.Textbox(label="Generated PlantUML Code"),
- title="PlantUML Code Generator",
- description="Generate PlantUML code using OpenAI's GPT-3.5-Turbo",
-)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md b/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md
deleted file mode 100644
index 7851da2331a8592089cdd05dfc0db2b56e13b57b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
codigo limpio anaya pdf 5 fjali pohore dhe fjalit mohore.rar super excellent academic intelligence book pdf free download Bon Iver - Bon Iver(2011) [FLAC][DELUXE EDITION].rar Mindjet MindManager 2018 18.1.155 Crack .rar paradisebirds anna and nelly descarga crack para civilcad 2013 32 bits Dum Laga Ke Haisha hd 720p movie download billboard top 100 songs 2008 torrent Need For Speed NFS Most Wanted Black Edition repack Mr DJ download
tinker tailor soldier spy 720p download movie GraphicRiver CD Case Mock Up gta 4 full game highly compressed 100 mb free 19 Imgsrc ru password list X-Men: Apocalypse (English) 2 full movie download in 720p hd download Yeh Dillagi 720p hd awara bengali full movie 720p download 11 imbratisare in amurg sandra brown pdf download farming simulator 2015 crack multiplayer plant anatomy book by b p pandey pdf 241
-
download harry potter and the deathly hallows part 1 in hindi 720p Classical mechanics by gupta kumar sharma pdf Kuch Kuch Hota Hai kickass download movie el libro rojo de las matematicas moises villena pdf download download crack artcam 2008 torrent Paheli 720p in dual audio hindi download Prem movie in hindi 720p Adobe Photoshop CS6 v. 13.0 Keygen PASSWORD.txt.rar tktorrents tamil movies.com The Last Witch Hunter (English) 2 in hindi 720p torrent
-
Copyspider 1.1.16 key generator Young Strawberry-patch35-ira11 81 BD-Company BD-Team Lolitaguy lolita.14 crack solidworks 2012 64 bit windows 8 solid squad descargar tricalc 8.0 full espavol gratis Shirin Farhad Ki Toh Nikal Padi full movies 720p torrent mozilla firefox 3.8 download 2 chainz i'm different mp3 download Airlift movie 720p download movie astro vision lifesign software with crack mp4 sibel kekili porno indir
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md b/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md
deleted file mode 100644
index 1d659b6e62b57cc29c54502852d39d3d540bc41a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md b/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md
deleted file mode 100644
index c7c3bff51db8505dcd66dc0f8d15111660321ca7..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md b/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md
deleted file mode 100644
index 80c9a44ac3d42cc7099b6fb4d80d8002d3e428c2..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How long does your SSD last? Are you afraid of SSD failure? Read this post to get the answers. It will also tell you how to take care of your SSD to expand its life and how to protect your data with MiniTool Partition Wizard when facing SSD failure.
-
SSDs are now widely available in mainstream PCs, and many of you may have upgraded your hard drive from HDD to SSD or plan to do so. The performance of SSDs is clearly better than that of HDDs, but some people may be worried about SSD lifespan. How long do SSDs last? Read on to learn how to calculate SSD life.
TBW (Terabytes Written) indicates how much data a drive can write over its lifespan. For example, an SSD with 500 TBW means that the SSD can write 500 TB before it needs to be replaced.
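As a rough, illustrative calculation (the daily write volume here is an assumption, not a measurement): if that 500 TBW drive receives about 50 GB of writes per day, its rated endurance works out to roughly 500,000 GB ÷ 50 GB per day = 10,000 days, or around 27 years of writing.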
-
Write amplification can shorten SSD life considerably. To mitigate this problem, new technologies such as wear leveling and bad block management are applied.
-
The wear leveling algorithm controls uneven "wear" of flash media sectors by distributing writes across many sectors. Thus, all sectors of the flash media reach their endurance limit at roughly the same time, which extends the life of the drive.
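For an intuitive, simplified illustration: if a drive has 1,000 blocks each rated for 3,000 erase cycles, always writing to the same block would exhaust it after 3,000 erases, whereas spreading the same workload evenly across all blocks lets the drive absorb roughly 3,000,000 block erases before any single block reaches its limit.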
-
Many of you may also be interested in SSD vs HDD lifespan. How long do hard drives last? To answer this question, you need to know about HDD lifespan, too. In this post, I only compare their lifespans.
-
Actually, the HDD lifespan is longer than the SSD lifespan because the HDD adopts an overwriting method to write new data. However, many of you may find that the HDD will fail sooner than SSDs in most cases.
-
This operation will not cause problems with HDDs, but it will reduce the efficiency of garbage collection (GC) because some invalid data will be treated as valid data by the SSD. Hence, SSD life is shortened because the drive performs more writes during GC.
-
For SSDs, disk defragmentation is unnecessary because SSDs don't require seek time. What's more, this kind of data moving without any benefit can greatly damage the SSD life. Therefore, you should disable it (it is usually used in Windows versions before Windows 8).
-
-
Struggling to find some? The Macrium Reflect free version performs well. Simply enter your email address and the company will send you a registration code and download link to use. From here, install the software on your Windows 11 computer and enter the registration code on the screen after it asks for a licence key, pressing Next to skip the licence key screen.
No matter what version you have, Windows is the home of your digital life. But all that shuffling, downloading, and browsing take its toll and can clog up your system quickly. AVG TuneUp can clear out years of grime, make your browsing speedier and lighter, and keep your favorite apps updated automatically. Enjoy an optimal Windows experience with AVG TuneUp.
-
Alisa is a professional English editor with 4-year experience. She loves writing and focuses on sharing detailed solutions and thoughts for computer problems, data recovery & backup, digital gadgets, tech news, etc. Through her articles, users can always easily get related problems solved and find what they want. In spare time, she likes basketball, badminton, tennis, cycling, running, and singing. She is very funny and energetic in life, and always brings friends lots of laughs.
-
SanDisk may, at its option, either: (1) repair or replace the Product with a new reconditioned or refurbished Product of equal or greater capacity, or another equivalent product; or (2) refund the current market value of the Product at the time the warranty claim is made to SanDisk, or the value as determined by regional requirements, if SanDisk is unable to repair or replace the Product. See below for regional requirements. In the case of replacements, SanDisk may replace the Product with one that was previously used, repaired, and tested to meet SanDisk specifications. SanDisk will not be liable for indirect or consequential damage (including loss of data), or for damage caused by improper use (including use in an incompatible device or manner and use otherwise not in accordance with the instructions), or by improper installation, unprofessional repair, modification or accident. This constitutes SanDisk's entire liability which will never exceed the price you paid for it, plus the necessary costs you made for the warranty claim. SanDisk products must not be used in applications where failure could threaten injury or life, such as life support systems. SANDISK DISCLAIMS ALL EXPRESS AND IMPLIED WARRANTIES TO THE FULLEST EXTENT PERMITTED BY LAW. IF SANDISK CANNOT DISCLAIM IMPLIED WARRANTIES UNDER APPLICABLE LAW, THEN TO THE EXTENT POSSIBLE, SUCH IMPLIED WARRANTIES ARE LIMITED TO THE DURATION OF THE EXPRESS WARRANTY. THE WARRANTY DURATION ON ANY REPLACED PRODUCT WILL BE THAT PORTION OF THE WARRANTY PERIOD REMAINING ON YOUR ORIGINAL PRODUCT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE EXCLUSION OR LIMITATION MAY NOT APPLY TO YOU.
-
SanDisk will not be liable for indirect or consequential damage (including loss of data), or for damage caused by improper use (including use in an incompatible device or manner and use otherwise not in accordance with the instructions), or by improper installation, unprofessional repair, modification or accident. This constitutes SanDisk's entire liability, which will never exceed the price you paid for it, plus the necessary costs you made for the warranty claim. SanDisk products must not be used in applications where failure could threaten injury or life, such as life support systems. SANDISK DISCLAIMS ALL EXPRESS AND IMPLIED WARRANTIES TO THE FULLEST EXTENT PERMITTED BY LAW. IF SANDISK CANNOT DISCLAIM IMPLIED WARRANTIES UNDER APPLICABLE LAW, THEN TO THE EXTENT POSSIBLE, SUCH IMPLIED WARRANTIES ARE LIMITED TO THE DURATION OF THE EXPRESS WARRANTY. THE WARRANTY DURATION ON ANY REPLACED PRODUCT WILL BE THAT PORTION OF THE WARRANTY PERIOD REMAINING ON YOUR ORIGINAL PRODUCT. SOME STATES (OR JURISDICTIONS) DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE EXCLUSION OR LIMITATION MAY NOT APPLY TO YOU. This limited warranty gives you specific legal rights. National, state and local laws may grant you other rights that are not affected by this warranty.
-
1. Manufacturer warrants to Customer that this Product, excluding content and/or software (if applicable) supplied with or within the Product, will, during the applicable Warranty Period (as defined below) specified below under normal use conditions, (a) be free from material defects in material or workmanship and, (b) will materially conform to Manufacturer's published product specifications. A Product will be considered to have a material defect or to be materially defective only if such Product does not meet the stated design lifetime (up to the applicable Warranty Period) and is returned to the appropriate location within the Warranty Period and subject to applicable performance threshold information contained in the Product's datasheet (as produced or provided by Manufacturer).
-
7. Manufacturer products, including the Product, must not be used in applications where failure could threaten injury or life, such as aviation, automotive, nuclear, medical or life support systems (or any other form of ultra-hazardous applications), and under no circumstances shall Manufacturer have any Warranty or other obligations arising from any such Product uses. NOTWITHSTANDING ANYTHING TO THE CONTRARY, AS PRODUCTS HAVE VARIED FAILURE RATES AND A LIMITED USEFUL LIFE PERIOD WITH ENDURANCE LIMITS, THE RESPONSIBILITY FOR THE DESIGN, MANUFACTURING AND ADEQUATE TESTING OF CUSTOMER'S APPLICATIONS, SYSTEMS AND DEVICES USING PRODUCTS HEREIN LIES WITH CUSTOMER WHERE FAILURE OF THE PRODUCT COULD RESULT, DIRECTLY OR INDIRECTLY, IN DEATH, PERSONAL INJURY, OR SEVERE PROPERTY OR ENVIRONMENTAL DAMAGE, INCLUDING WITHOUT LIMITATION, AS CRITICAL COMPONENTS IN MEDICAL DEVICES, LIFE SUPPORT DEVICES, AUTOMOTIVE AND OTHER CRITICAL APPLICATIONS ("CRITICAL APPLICATIONS"). CUSTOMER IS RESPONSIBLE FOR PUTTING IN PLACE SAFETY MEASURES AND APPROPRIATE REDUNDANCIES, FAULT TOLERANT AND BACK-UP FEATURES SUFFICIENT TO PROTECT END USERS FROM ANY RISK OF DAMAGE, INJURY OR DEATH RESULTING FROM ANY FAILURE OR ANY OTHER ISSUE IN THE UTILIZATION OF THE PRODUCTS IN OR BY CUSTOMER'S APPLICATIONS, SYSTEMS OR DEVICES WHEN DEALING WITH CRITICAL APPLICATIONS AND SHALL OTHERWISE PROVIDE END USERS WITH ALERTS, USE AND MAINTENANCE INSTRUCTIONS TO AVOID SUCH RISKS.
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py
deleted file mode 100644
index 870776bc7bf23230ff03d0185cb766f48180bce9..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py
+++ /dev/null
@@ -1,458 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""Pen to rasterize paths with FreeType."""
-
-__all__ = ["FreeTypePen"]
-
-import os
-import ctypes
-import platform
-import subprocess
-import collections
-import math
-
-import freetype
-from freetype.raw import FT_Outline_Get_Bitmap, FT_Outline_Get_BBox, FT_Outline_Get_CBox
-from freetype.ft_types import FT_Pos
-from freetype.ft_structs import FT_Vector, FT_BBox, FT_Bitmap, FT_Outline
-from freetype.ft_enums import (
- FT_OUTLINE_NONE,
- FT_OUTLINE_EVEN_ODD_FILL,
- FT_PIXEL_MODE_GRAY,
- FT_CURVE_TAG_ON,
- FT_CURVE_TAG_CONIC,
- FT_CURVE_TAG_CUBIC,
-)
-from freetype.ft_errors import FT_Exception
-
-from fontTools.pens.basePen import BasePen, PenError
-from fontTools.misc.roundTools import otRound
-from fontTools.misc.transform import Transform
-
-Contour = collections.namedtuple("Contour", ("points", "tags"))
-
-
-class FreeTypePen(BasePen):
- """Pen to rasterize paths with FreeType. Requires `freetype-py` module.
-
- Constructs ``FT_Outline`` from the paths, and renders it within a bitmap
- buffer.
-
- For ``array()`` and ``show()``, `numpy` and `matplotlib` must be installed.
- For ``image()``, `Pillow` is required. Each module is lazily loaded when the
- corresponding method is called.
-
- Args:
- glyphSet: a dictionary of drawable glyph objects keyed by name
- used to resolve component references in composite glyphs.
-
- :Examples:
- If `numpy` and `matplotlib` are available, the following code will
- show the glyph image of `fi` in a new window::
-
- from fontTools.ttLib import TTFont
- from fontTools.pens.freetypePen import FreeTypePen
- from fontTools.misc.transform import Offset
- pen = FreeTypePen(None)
- font = TTFont('SourceSansPro-Regular.otf')
- glyph = font.getGlyphSet()['fi']
- glyph.draw(pen)
- width, ascender, descender = glyph.width, font['OS/2'].usWinAscent, -font['OS/2'].usWinDescent
- height = ascender - descender
- pen.show(width=width, height=height, transform=Offset(0, -descender))
-
- Combining with `uharfbuzz`, you can typeset a chunk of glyphs in a pen::
-
- import uharfbuzz as hb
- from fontTools.pens.freetypePen import FreeTypePen
- from fontTools.pens.transformPen import TransformPen
- from fontTools.misc.transform import Offset
-
- en1, en2, ar, ja = 'Typesetting', 'Jeff', 'صف الحروف', 'たいぷせっと'
- for text, font_path, direction, typo_ascender, typo_descender, vhea_ascender, vhea_descender, contain, features in (
- (en1, 'NotoSans-Regular.ttf', 'ltr', 2189, -600, None, None, False, {"kern": True, "liga": True}),
- (en2, 'NotoSans-Regular.ttf', 'ltr', 2189, -600, None, None, True, {"kern": True, "liga": True}),
- (ar, 'NotoSansArabic-Regular.ttf', 'rtl', 1374, -738, None, None, False, {"kern": True, "liga": True}),
- (ja, 'NotoSansJP-Regular.otf', 'ltr', 880, -120, 500, -500, False, {"palt": True, "kern": True}),
- (ja, 'NotoSansJP-Regular.otf', 'ttb', 880, -120, 500, -500, False, {"vert": True, "vpal": True, "vkrn": True})
- ):
- blob = hb.Blob.from_file_path(font_path)
- face = hb.Face(blob)
- font = hb.Font(face)
- buf = hb.Buffer()
- buf.direction = direction
- buf.add_str(text)
- buf.guess_segment_properties()
- hb.shape(font, buf, features)
-
- x, y = 0, 0
- pen = FreeTypePen(None)
- for info, pos in zip(buf.glyph_infos, buf.glyph_positions):
- gid = info.codepoint
- transformed = TransformPen(pen, Offset(x + pos.x_offset, y + pos.y_offset))
- font.draw_glyph_with_pen(gid, transformed)
- x += pos.x_advance
- y += pos.y_advance
-
- offset, width, height = None, None, None
- if direction in ('ltr', 'rtl'):
- offset = (0, -typo_descender)
- width = x
- height = typo_ascender - typo_descender
- else:
- offset = (-vhea_descender, -y)
- width = vhea_ascender - vhea_descender
- height = -y
- pen.show(width=width, height=height, transform=Offset(*offset), contain=contain)
-
- For Jupyter Notebook, the rendered image will be displayed in a cell if
- you replace ``show()`` with ``image()`` in the examples.
- """
-
- def __init__(self, glyphSet):
- BasePen.__init__(self, glyphSet)
- self.contours = []
-
- def outline(self, transform=None, evenOdd=False):
- """Converts the current contours to ``FT_Outline``.
-
- Args:
- transform: An optional 6-tuple containing an affine transformation,
- or a ``Transform`` object from the ``fontTools.misc.transform``
- module.
- evenOdd: Pass ``True`` for even-odd fill instead of non-zero.
- """
- transform = transform or Transform()
- if not hasattr(transform, "transformPoint"):
- transform = Transform(*transform)
- n_contours = len(self.contours)
- n_points = sum((len(contour.points) for contour in self.contours))
- points = []
- for contour in self.contours:
- for point in contour.points:
- point = transform.transformPoint(point)
- points.append(
- FT_Vector(
- FT_Pos(otRound(point[0] * 64)), FT_Pos(otRound(point[1] * 64))
- )
- )
- tags = []
- for contour in self.contours:
- for tag in contour.tags:
- tags.append(tag)
- contours = []
- contours_sum = 0
- for contour in self.contours:
- contours_sum += len(contour.points)
- contours.append(contours_sum - 1)
- flags = FT_OUTLINE_EVEN_ODD_FILL if evenOdd else FT_OUTLINE_NONE
- return FT_Outline(
- (ctypes.c_short)(n_contours),
- (ctypes.c_short)(n_points),
- (FT_Vector * n_points)(*points),
- (ctypes.c_ubyte * n_points)(*tags),
- (ctypes.c_short * n_contours)(*contours),
- (ctypes.c_int)(flags),
- )
-
- def buffer(
- self, width=None, height=None, transform=None, contain=False, evenOdd=False
- ):
- """Renders the current contours within a bitmap buffer.
-
- Args:
- width: Image width of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- height: Image height of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- transform: An optional 6-tuple containing an affine transformation,
- or a ``Transform`` object from the ``fontTools.misc.transform``
- module. The bitmap size is not affected by this matrix.
- contain: If ``True``, the image size will be automatically expanded
- so that it fits to the bounding box of the paths. Useful for
- rendering glyphs with negative sidebearings without clipping.
- evenOdd: Pass ``True`` for even-odd fill instead of non-zero.
-
- Returns:
- A tuple of ``(buffer, size)``, where ``buffer`` is a ``bytes``
- object of the resulted bitmap and ``size`` is a 2-tuple of its
- dimension.
-
- :Notes:
- The image size should always be given explicitly if you need to get
- a proper glyph image. When ``width`` and ``height`` are omitted, it
- forcibly fits to the bounding box and the side bearings get
- cropped. If you pass ``0`` to both ``width`` and ``height`` and set
- ``contain`` to ``True``, it expands to the bounding box while
- maintaining the origin of the contours, meaning that LSB will be
- maintained but RSB won’t. The difference between the two becomes
- more obvious when a rotation or skew transformation is applied.
-
- :Example:
- .. code-block::
-
- >> pen = FreeTypePen(None)
- >> glyph.draw(pen)
- >> buf, size = pen.buffer(width=500, height=1000)
- >> type(buf), len(buf), size
- (<class 'bytes'>, 500000, (500, 1000))
-
- """
- transform = transform or Transform()
- if not hasattr(transform, "transformPoint"):
- transform = Transform(*transform)
- contain_x, contain_y = contain or width is None, contain or height is None
- if contain_x or contain_y:
- dx, dy = transform.dx, transform.dy
- bbox = self.bbox
- p1, p2, p3, p4 = (
- transform.transformPoint((bbox[0], bbox[1])),
- transform.transformPoint((bbox[2], bbox[1])),
- transform.transformPoint((bbox[0], bbox[3])),
- transform.transformPoint((bbox[2], bbox[3])),
- )
- px, py = (p1[0], p2[0], p3[0], p4[0]), (p1[1], p2[1], p3[1], p4[1])
- if contain_x:
- if width is None:
- dx = dx - min(*px)
- width = max(*px) - min(*px)
- else:
- dx = dx - min(min(*px), 0.0)
- width = max(width, max(*px) - min(min(*px), 0.0))
- if contain_y:
- if height is None:
- dy = dy - min(*py)
- height = max(*py) - min(*py)
- else:
- dy = dy - min(min(*py), 0.0)
- height = max(height, max(*py) - min(min(*py), 0.0))
- transform = Transform(*transform[:4], dx, dy)
- width, height = math.ceil(width), math.ceil(height)
- buf = ctypes.create_string_buffer(width * height)
- bitmap = FT_Bitmap(
- (ctypes.c_int)(height),
- (ctypes.c_int)(width),
- (ctypes.c_int)(width),
- (ctypes.POINTER(ctypes.c_ubyte))(buf),
- (ctypes.c_short)(256),
- (ctypes.c_ubyte)(FT_PIXEL_MODE_GRAY),
- (ctypes.c_char)(0),
- (ctypes.c_void_p)(None),
- )
- outline = self.outline(transform=transform, evenOdd=evenOdd)
- err = FT_Outline_Get_Bitmap(
- freetype.get_handle(), ctypes.byref(outline), ctypes.byref(bitmap)
- )
- if err != 0:
- raise FT_Exception(err)
- return buf.raw, (width, height)
-
- def array(
- self, width=None, height=None, transform=None, contain=False, evenOdd=False
- ):
- """Returns the rendered contours as a numpy array. Requires `numpy`.
-
- Args:
- width: Image width of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- height: Image height of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- transform: An optional 6-tuple containing an affine transformation,
- or a ``Transform`` object from the ``fontTools.misc.transform``
- module. The bitmap size is not affected by this matrix.
- contain: If ``True``, the image size will be automatically expanded
- so that it fits to the bounding box of the paths. Useful for
- rendering glyphs with negative sidebearings without clipping.
- evenOdd: Pass ``True`` for even-odd fill instead of non-zero.
-
- Returns:
- A ``numpy.ndarray`` object with a shape of ``(height, width)``.
- Each element takes a value in the range of ``[0.0, 1.0]``.
-
- :Notes:
- The image size should always be given explicitly if you need to get
- a proper glyph image. When ``width`` and ``height`` are omitted, it
- forcibly fits to the bounding box and the side bearings get
- cropped. If you pass ``0`` to both ``width`` and ``height`` and set
- ``contain`` to ``True``, it expands to the bounding box while
- maintaining the origin of the contours, meaning that LSB will be
- maintained but RSB won’t. The difference between the two becomes
- more obvious when a rotation or skew transformation is applied.
-
- :Example:
- .. code-block::
-
- >> pen = FreeTypePen(None)
- >> glyph.draw(pen)
- >> arr = pen.array(width=500, height=1000)
- >> type(arr), arr.shape
- (<class 'numpy.ndarray'>, (1000, 500))
- """
- import numpy as np
-
- buf, size = self.buffer(
- width=width,
- height=height,
- transform=transform,
- contain=contain,
- evenOdd=evenOdd,
- )
- return np.frombuffer(buf, "B").reshape((size[1], size[0])) / 255.0
-
- def show(
- self, width=None, height=None, transform=None, contain=False, evenOdd=False
- ):
- """Plots the rendered contours with `pyplot`. Requires `numpy` and
- `matplotlib`.
-
- Args:
- width: Image width of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- height: Image height of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- transform: An optional 6-tuple containing an affine transformation,
- or a ``Transform`` object from the ``fontTools.misc.transform``
- module. The bitmap size is not affected by this matrix.
- contain: If ``True``, the image size will be automatically expanded
- so that it fits to the bounding box of the paths. Useful for
- rendering glyphs with negative sidebearings without clipping.
- evenOdd: Pass ``True`` for even-odd fill instead of non-zero.
-
- :Notes:
- The image size should always be given explicitly if you need to get
- a proper glyph image. When ``width`` and ``height`` are omitted, it
- forcibly fits to the bounding box and the side bearings get
- cropped. If you pass ``0`` to both ``width`` and ``height`` and set
- ``contain`` to ``True``, it expands to the bounding box while
- maintaining the origin of the contours, meaning that LSB will be
- maintained but RSB won’t. The difference between the two becomes
- more obvious when a rotation or skew transformation is applied.
-
- :Example:
- .. code-block::
-
- >> pen = FreeTypePen(None)
- >> glyph.draw(pen)
- >> pen.show(width=500, height=1000)
- """
- from matplotlib import pyplot as plt
-
- a = self.array(
- width=width,
- height=height,
- transform=transform,
- contain=contain,
- evenOdd=evenOdd,
- )
- plt.imshow(a, cmap="gray_r", vmin=0, vmax=1)
- plt.show()
-
- def image(
- self, width=None, height=None, transform=None, contain=False, evenOdd=False
- ):
- """Returns the rendered contours as a PIL image. Requires `Pillow`.
- Can be used to display a glyph image in Jupyter Notebook.
-
- Args:
- width: Image width of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- height: Image height of the bitmap in pixels. If omitted, it
- automatically fits to the bounding box of the contours.
- transform: An optional 6-tuple containing an affine transformation,
- or a ``Transform`` object from the ``fontTools.misc.transform``
- module. The bitmap size is not affected by this matrix.
- contain: If ``True``, the image size will be automatically expanded
- so that it fits to the bounding box of the paths. Useful for
- rendering glyphs with negative sidebearings without clipping.
- evenOdd: Pass ``True`` for even-odd fill instead of non-zero.
-
- Returns:
- A ``PIL.Image.Image`` object. The image is filled in black with alpha
- channel obtained from the rendered bitmap.
-
- :Notes:
- The image size should always be given explicitly if you need to get
- a proper glyph image. When ``width`` and ``height`` are omitted, it
- forcibly fits to the bounding box and the side bearings get
- cropped. If you pass ``0`` to both ``width`` and ``height`` and set
- ``contain`` to ``True``, it expands to the bounding box while
- maintaining the origin of the contours, meaning that LSB will be
- maintained but RSB won’t. The difference between the two becomes
- more obvious when a rotation or skew transformation is applied.
-
- :Example:
- .. code-block::
-
- >> pen = FreeTypePen(None)
- >> glyph.draw(pen)
- >> img = pen.image(width=500, height=1000)
- >> type(img), img.size
- (<class 'PIL.Image.Image'>, (500, 1000))
- """
- from PIL import Image
-
- buf, size = self.buffer(
- width=width,
- height=height,
- transform=transform,
- contain=contain,
- evenOdd=evenOdd,
- )
- img = Image.new("L", size, 0)
- img.putalpha(Image.frombuffer("L", size, buf))
- return img
-
- @property
- def bbox(self):
- """Computes the exact bounding box of an outline.
-
- Returns:
- A tuple of ``(xMin, yMin, xMax, yMax)``.
- """
- bbox = FT_BBox()
- outline = self.outline()
- FT_Outline_Get_BBox(ctypes.byref(outline), ctypes.byref(bbox))
- return (bbox.xMin / 64.0, bbox.yMin / 64.0, bbox.xMax / 64.0, bbox.yMax / 64.0)
-
- @property
- def cbox(self):
- """Returns an outline's ‘control box’.
-
- Returns:
- A tuple of ``(xMin, yMin, xMax, yMax)``.
- """
- cbox = FT_BBox()
- outline = self.outline()
- FT_Outline_Get_CBox(ctypes.byref(outline), ctypes.byref(cbox))
- return (cbox.xMin / 64.0, cbox.yMin / 64.0, cbox.xMax / 64.0, cbox.yMax / 64.0)
-
- def _moveTo(self, pt):
- contour = Contour([], [])
- self.contours.append(contour)
- contour.points.append(pt)
- contour.tags.append(FT_CURVE_TAG_ON)
-
- def _lineTo(self, pt):
- if not (self.contours and len(self.contours[-1].points) > 0):
- raise PenError("Contour missing required initial moveTo")
- contour = self.contours[-1]
- contour.points.append(pt)
- contour.tags.append(FT_CURVE_TAG_ON)
-
- def _curveToOne(self, p1, p2, p3):
- if not (self.contours and len(self.contours[-1].points) > 0):
- raise PenError("Contour missing required initial moveTo")
- t1, t2, t3 = FT_CURVE_TAG_CUBIC, FT_CURVE_TAG_CUBIC, FT_CURVE_TAG_ON
- contour = self.contours[-1]
- for p, t in ((p1, t1), (p2, t2), (p3, t3)):
- contour.points.append(p)
- contour.tags.append(t)
-
- def _qCurveToOne(self, p1, p2):
- if not (self.contours and len(self.contours[-1].points) > 0):
- raise PenError("Contour missing required initial moveTo")
- t1, t2 = FT_CURVE_TAG_CONIC, FT_CURVE_TAG_ON
- contour = self.contours[-1]
- for p, t in ((p1, t1), (p2, t2)):
- contour.points.append(p)
- contour.tags.append(t)
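-
-
-# A minimal usage sketch (illustrative only, not exercised by the module itself):
-# rasterize one glyph to a PNG with the image() helper documented above. The
-# font file and glyph name are placeholders, and Pillow must be installed.
-if __name__ == "__main__":
-    from fontTools.ttLib import TTFont
-    from fontTools.misc.transform import Offset
-
-    font = TTFont("SourceSansPro-Regular.otf")  # placeholder font path
-    glyphSet = font.getGlyphSet()
-    pen = FreeTypePen(glyphSet)
-    glyphSet["A"].draw(pen)
-
-    ascender = font["OS/2"].usWinAscent
-    descender = -font["OS/2"].usWinDescent
-
-    # image() returns a black PIL image whose alpha channel holds the coverage.
-    img = pen.image(
-        width=glyphSet["A"].width,
-        height=ascender - descender,
-        transform=Offset(0, -descender),
-    )
-    img.save("A.png")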
diff --git a/spaces/codelion/Grounding_DINO_demo/README.md b/spaces/codelion/Grounding_DINO_demo/README.md
deleted file mode 100644
index 081e39d1a209013fc2a5342efc9b1307923488c8..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/README.md
+++ /dev/null
@@ -1,126 +0,0 @@
----
-title: Grounding DINO Demo
-emoji: 💻
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# Grounding DINO
-[📃Paper](https://arxiv.org/abs/2303.05499) |
-[📽️Video](https://www.youtube.com/watch?v=wxWDt5UiwY8) |
-[🗯️ Github](https://github.com/IDEA-Research/GroundingDINO) |
-[📯Demo on Colab](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) |
-[🤗Demo on HF (Coming soon)]()
-
-[Open in Colab](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) \
-[Zero-Shot Object Detection on MSCOCO (Papers with Code)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \
-[Zero-Shot Object Detection on ODinW (Papers with Code)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \
-[Object Detection on COCO minival (Papers with Code)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \
-[Object Detection on COCO (Papers with Code)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded)
-
-
-
-Official pytorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now!
-
-
-## Highlight
-
-- **Open-Set Detection.** Detect **everything** with language!
-- **High Performance.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**.
-- **Flexible.** Collaboration with Stable Diffusion for Image Editing.
-
-## News
-[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs.\
-[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. Thanks to @Piotr! \
-[2023/03/22] Code is available Now!
-
-
-
-## TODO
-
-- [x] Release inference code and demo.
-- [x] Release checkpoints.
-- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos.
-- [ ] Release training codes.
-
-## Install
-
-If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. The extension will be compiled in CPU-only mode if CUDA is not available.
-
-```bash
-pip install -e .
-```
-
-## Demo
-
-```bash
-CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \
- -c /path/to/config \
- -p /path/to/checkpoint \
- -i .asset/cats.png \
- -o "outputs/0" \
- -t "cat ear." \
- [--cpu-only] # add this flag to run in CPU-only mode
-```
-See the `demo/inference_on_a_image.py` for more details.
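-
-If you prefer to run inference from Python rather than the CLI, the sketch below shows one way to do it. It is illustrative only and assumes the `load_model`, `load_image`, `predict` and `annotate` helpers exposed by `groundingdino.util.inference` in recent releases; the config, checkpoint and image paths are placeholders to adapt to your setup.
-
-```python
-import cv2
-from groundingdino.util.inference import load_model, load_image, predict, annotate
-
-# Placeholder paths: point these at your local config, checkpoint and image.
-CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
-CHECKPOINT_PATH = "weights/groundingdino_swint_ogc.pth"
-
-model = load_model(CONFIG_PATH, CHECKPOINT_PATH)
-image_source, image = load_image(".asset/cats.png")
-
-# Categories are separated by periods in the text prompt, as in the CLI demo.
-boxes, logits, phrases = predict(
-    model=model,
-    image=image,
-    caption="cat ear.",
-    box_threshold=0.35,
-    text_threshold=0.25,
-)
-
-# annotate() draws the predicted boxes and labels on the source image.
-annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
-cv2.imwrite("annotated.jpg", annotated)
-```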
-
-## Checkpoints
-
-
-
-
-
-
-## Acknowledgement
-
-Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work!
-
-We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work is available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well.
-
-Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models.
-
-
-## Citation
-
-If you find our work helpful for your research, please consider citing the following BibTeX entry.
-
-```bibtex
-@inproceedings{ShilongLiu2023GroundingDM,
- title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
- author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
- year={2023}
-}
-```
-
-
-
-
-
diff --git a/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h b/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h
deleted file mode 100644
index 6405e72b64f70b15e284c19734c26361846e95b9..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h
+++ /dev/null
@@ -1,191 +0,0 @@
-/*
- * Copyright (C) 2010-2011 x264 project
- *
- * Authors: Steven Walters
- * Pegasys Inc.
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * w32threads to pthreads wrapper
- */
-
-#ifndef COMPAT_W32PTHREADS_H
-#define COMPAT_W32PTHREADS_H
-
-/* Build up a pthread-like API using underlying Windows API. Have only static
- * methods so as to not conflict with a potentially linked in pthread-win32
- * library.
- * As most functions here are used without checking return values,
- * only implement return values as necessary. */
-
-#define WIN32_LEAN_AND_MEAN
-#include <windows.h>
-#include <process.h>
-#include <time.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/common.h"
-#include "libavutil/internal.h"
-#include "libavutil/mem.h"
-#include "libavutil/time.h"
-
-typedef struct pthread_t {
- void *handle;
- void *(*func)(void* arg);
- void *arg;
- void *ret;
-} pthread_t;
-
-/* use light weight mutex/condition variable API for Windows Vista and later */
-typedef SRWLOCK pthread_mutex_t;
-typedef CONDITION_VARIABLE pthread_cond_t;
-
-#define PTHREAD_MUTEX_INITIALIZER SRWLOCK_INIT
-#define PTHREAD_COND_INITIALIZER CONDITION_VARIABLE_INIT
-
-#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
-#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
-
-#define PTHREAD_CANCEL_ENABLE 1
-#define PTHREAD_CANCEL_DISABLE 0
-
-static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
-{
- pthread_t *h = (pthread_t*)arg;
- h->ret = h->func(h->arg);
- return 0;
-}
-
-static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
- void *(*start_routine)(void*), void *arg)
-{
- thread->func = start_routine;
- thread->arg = arg;
-#if HAVE_WINRT
- thread->handle = (void*)CreateThread(NULL, 0, win32thread_worker, thread,
- 0, NULL);
-#else
- thread->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, thread,
- 0, NULL);
-#endif
- return !thread->handle;
-}
-
-static av_unused int pthread_join(pthread_t thread, void **value_ptr)
-{
- DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
- if (ret != WAIT_OBJECT_0) {
- if (ret == WAIT_ABANDONED)
- return EINVAL;
- else
- return EDEADLK;
- }
- if (value_ptr)
- *value_ptr = thread.ret;
- CloseHandle(thread.handle);
- return 0;
-}
-
-static inline int pthread_mutex_init(pthread_mutex_t *m, void* attr)
-{
- InitializeSRWLock(m);
- return 0;
-}
-static inline int pthread_mutex_destroy(pthread_mutex_t *m)
-{
- /* Unlocked SRW locks use no resources */
- return 0;
-}
-static inline int pthread_mutex_lock(pthread_mutex_t *m)
-{
- AcquireSRWLockExclusive(m);
- return 0;
-}
-static inline int pthread_mutex_unlock(pthread_mutex_t *m)
-{
- ReleaseSRWLockExclusive(m);
- return 0;
-}
-
-typedef INIT_ONCE pthread_once_t;
-#define PTHREAD_ONCE_INIT INIT_ONCE_STATIC_INIT
-
-static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
-{
- BOOL pending = FALSE;
- InitOnceBeginInitialize(once_control, 0, &pending, NULL);
- if (pending)
- init_routine();
- InitOnceComplete(once_control, 0, NULL);
- return 0;
-}
-
-static inline int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
-{
- InitializeConditionVariable(cond);
- return 0;
-}
-
-/* native condition variables do not destroy */
-static inline int pthread_cond_destroy(pthread_cond_t *cond)
-{
- return 0;
-}
-
-static inline int pthread_cond_broadcast(pthread_cond_t *cond)
-{
- WakeAllConditionVariable(cond);
- return 0;
-}
-
-static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
-{
- SleepConditionVariableSRW(cond, mutex, INFINITE, 0);
- return 0;
-}
-
-static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
- const struct timespec *abstime)
-{
- int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
- DWORD t = av_clip64(abs_milli - av_gettime() / 1000, 0, UINT32_MAX);
-
- if (!SleepConditionVariableSRW(cond, mutex, t, 0)) {
- DWORD err = GetLastError();
- if (err == ERROR_TIMEOUT)
- return ETIMEDOUT;
- else
- return EINVAL;
- }
- return 0;
-}
-
-static inline int pthread_cond_signal(pthread_cond_t *cond)
-{
- WakeConditionVariable(cond);
- return 0;
-}
-
-static inline int pthread_setcancelstate(int state, int *oldstate)
-{
- return 0;
-}
-
-#endif /* COMPAT_W32PTHREADS_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c
deleted file mode 100644
index 1230f5de8869b8a0c9b7a5a72aafa914f00c81ab..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Loongson SIMD optimized pixblockdsp
- *
- * Copyright (c) 2015 Loongson Technology Corporation Limited
- * Copyright (c) 2015 Zhou Xiaoyong
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "pixblockdsp_mips.h"
-#include "libavutil/mips/asmdefs.h"
-#include "libavutil/mips/mmiutils.h"
-
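-/* Read an 8x8 block of unsigned 8-bit pixels (rows 'stride' bytes apart) and
- * store it into 'block' zero-extended to 16 bits, two rows per unrolled MMI
- * load/unpack/store group. */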
-void ff_get_pixels_8_mmi(int16_t *av_restrict block, const uint8_t *pixels,
- ptrdiff_t stride)
-{
- double ftmp[7];
- DECLARE_VAR_ALL64;
- DECLARE_VAR_ADDRT;
-
- __asm__ volatile (
- "pxor %[ftmp0], %[ftmp0], %[ftmp0] \n\t"
-
- MMI_LDC1(%[ftmp1], %[pixels], 0x00)
- MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00)
- "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t"
- "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t"
- "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t"
- "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t"
- MMI_SDC1(%[ftmp3], %[block], 0x00)
- MMI_SDC1(%[ftmp4], %[block], 0x08)
- MMI_SDC1(%[ftmp5], %[block], 0x10)
- MMI_SDC1(%[ftmp6], %[block], 0x18)
- PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t"
-
- MMI_LDC1(%[ftmp1], %[pixels], 0x00)
- MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00)
- "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t"
- "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t"
- "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t"
- "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t"
- MMI_SDC1(%[ftmp3], %[block], 0x20)
- MMI_SDC1(%[ftmp4], %[block], 0x28)
- MMI_SDC1(%[ftmp5], %[block], 0x30)
- MMI_SDC1(%[ftmp6], %[block], 0x38)
- PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t"
-
- MMI_LDC1(%[ftmp1], %[pixels], 0x00)
- MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00)
- "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t"
- "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t"
- "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t"
- "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t"
- MMI_SDC1(%[ftmp3], %[block], 0x40)
- MMI_SDC1(%[ftmp4], %[block], 0x48)
- MMI_SDC1(%[ftmp5], %[block], 0x50)
- MMI_SDC1(%[ftmp6], %[block], 0x58)
- PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t"
-
- MMI_LDC1(%[ftmp1], %[pixels], 0x00)
- MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00)
- "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t"
- "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t"
- "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t"
- "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t"
- MMI_SDC1(%[ftmp3], %[block], 0x60)
- MMI_SDC1(%[ftmp4], %[block], 0x68)
- MMI_SDC1(%[ftmp5], %[block], 0x70)
- MMI_SDC1(%[ftmp6], %[block], 0x78)
- : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]),
- [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]),
- [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]),
- [ftmp6]"=&f"(ftmp[6]),
- RESTRICT_ASM_ALL64
- RESTRICT_ASM_ADDRT
- [pixels]"+&r"(pixels)
- : [block]"r"((mips_reg)block), [stride]"r"((mips_reg)stride),
- [stride_x2]"r"((mips_reg)(stride<<1))
- : "memory"
- );
-}
-
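-/* Store the element-wise difference src1 - src2 of two 8x8 blocks of 8-bit
- * pixels into 'block' as 16-bit values, one row per loop iteration. */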
-void ff_diff_pixels_mmi(int16_t *av_restrict block, const uint8_t *src1,
- const uint8_t *src2, ptrdiff_t stride)
-{
- double ftmp[5];
- mips_reg tmp[1];
- DECLARE_VAR_ALL64;
-
- __asm__ volatile (
- "li %[tmp0], 0x08 \n\t"
- "pxor %[ftmp4], %[ftmp4], %[ftmp4] \n\t"
- "1: \n\t"
- MMI_LDC1(%[ftmp0], %[src1], 0x00)
- "por %[ftmp1], %[ftmp0], %[ftmp0] \n\t"
- MMI_LDC1(%[ftmp2], %[src2], 0x00)
- "por %[ftmp3], %[ftmp2], %[ftmp2] \n\t"
- "punpcklbh %[ftmp0], %[ftmp0], %[ftmp4] \n\t"
- "punpckhbh %[ftmp1], %[ftmp1], %[ftmp4] \n\t"
- "punpcklbh %[ftmp2], %[ftmp2], %[ftmp4] \n\t"
- "punpckhbh %[ftmp3], %[ftmp3], %[ftmp4] \n\t"
- "psubh %[ftmp0], %[ftmp0], %[ftmp2] \n\t"
- "psubh %[ftmp1], %[ftmp1], %[ftmp3] \n\t"
- MMI_SDC1(%[ftmp0], %[block], 0x00)
- MMI_SDC1(%[ftmp1], %[block], 0x08)
- PTR_ADDI "%[tmp0], %[tmp0], -0x01 \n\t"
- PTR_ADDIU "%[block], %[block], 0x10 \n\t"
- PTR_ADDU "%[src1], %[src1], %[stride] \n\t"
- PTR_ADDU "%[src2], %[src2], %[stride] \n\t"
- "bgtz %[tmp0], 1b \n\t"
- : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]),
- [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]),
- [ftmp4]"=&f"(ftmp[4]),
- [tmp0]"=&r"(tmp[0]),
- RESTRICT_ASM_ALL64
- [block]"+&r"(block), [src1]"+&r"(src1),
- [src2]"+&r"(src2)
- : [stride]"r"((mips_reg)stride)
- : "memory"
- );
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md b/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md
deleted file mode 100644
index a5bb19706d3d44eb49742b046ecd01c78b080f62..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Air Attack Mod APK Unlimited Life: A Review
-
If you are looking for a thrilling and action-packed arcade game, you might want to check out Air Attack Mod APK Unlimited Life. This is a modified version of the original Air Attack game, which gives you unlimited gold coins, unlimited lives, and no ads. In this article, we will review this modded game and tell you how to download and install it on your Android device.
-
What is Air Attack Mod APK?
-
Air Attack is a classic arcade game that lets you fly a fighter plane and shoot down enemy aircrafts, tanks, ships, and buildings. You can choose from different planes, weapons, and upgrades to customize your gameplay. You can also play in different modes, such as campaign, survival, and multiplayer.
Air Attack Mod APK is a hacked version of the original game that gives you some extra benefits. With this modded game, you can enjoy unlimited gold coins, unlimited lives, and no ads. This means you can buy anything you want in the game store, play as long as you want without losing lives, and enjoy a smooth and uninterrupted gaming experience.
-
Features of Air Attack Mod APK
-
Unlimited gold coins
-
Gold coins are the main currency in Air Attack. You can use them to buy new planes, weapons, upgrades, and power-ups. Normally, you have to earn gold coins by playing the game or watching ads. But with Air Attack Mod APK Unlimited Life, you can get unlimited gold coins for free. You can spend them as much as you want without worrying about running out.
-
Unlimited lives
-
Lives are the number of times you can play the game before you have to start over. Normally, you have to be careful not to get hit by enemy fire or crash into obstacles. If you lose all your lives, you have to wait for them to regenerate or buy more with gold coins. But with Air Attack Mod APK Unlimited Life, you can get unlimited lives for free. You can play as long as you want without losing lives or waiting for them to refill.
-
No ads
-
Ads are the annoying pop-ups that interrupt your gameplay and waste your time. Normally, you have to watch ads to earn gold coins or get extra lives. But with Air Attack Mod APK Unlimited Life, you can get rid of all the ads for free. You can enjoy a smooth and uninterrupted gaming experience without any distractions.
-
How to download and install Air Attack Mod APK?
-
If you want to download and install Air Attack Mod APK Unlimited Life on your Android device, you have to follow these simple steps:
-
air attack mod apk unlimited life and money
-air attack mod apk unlimited life and coins
-air attack mod apk unlimited life and ammo
-air attack mod apk unlimited life and bombs
-air attack mod apk unlimited life and fuel
-air attack mod apk unlimited life and stars
-air attack mod apk unlimited life and missiles
-air attack mod apk unlimited life and health
-air attack mod apk unlimited life and gems
-air attack mod apk unlimited life and diamonds
-air attack mod apk unlimited life and weapons
-air attack mod apk unlimited life and planes
-air attack mod apk unlimited life and upgrades
-air attack mod apk unlimited life and levels
-air attack mod apk unlimited life and stages
-air attack mod apk unlimited life and missions
-air attack mod apk unlimited life and achievements
-air attack mod apk unlimited life and medals
-air attack mod apk unlimited life and rewards
-air attack mod apk unlimited life and bonuses
-air attack mod apk unlimited life and skins
-air attack mod apk unlimited life and modes
-air attack mod apk unlimited life and features
-air attack mod apk unlimited life and cheats
-air attack mod apk unlimited life and hacks
-air attack mod apk unlimited life download free
-air attack mod apk unlimited life download latest version
-air attack mod apk unlimited life download for android
-air attack mod apk unlimited life download 2023
-air attack mod apk unlimited life download link
-air attack mod apk unlimited life download offline
-air attack mod apk unlimited life download online
-air attack mod apk unlimited life download no root
-air attack mod apk unlimited life download no ads
-air attack mod apk unlimited life download no virus
-air attack mod apk unlimited life download safe
-air attack mod apk unlimited life download secure
-air attack mod apk unlimited life download fast
-air attack mod apk unlimited life download easy
-air attack mod apk unlimited life download high quality
-how to get air attack mod apk unlimited life
-how to install air attack mod apk unlimited life
-how to play air attack mod apk unlimited life
-how to use air attack mod apk unlimited life
-how to update air attack mod apk unlimited life
-how to uninstall air attack mod apk unlimited life
-how to hack air attack mod apk unlimited life
-how to cheat in air attack mod apk unlimited life
-how to enjoy air attack mod apk unlimited life
-
Step 1: Download the APK file
-
The first step is to download the APK file of Air Attack Mod APK Unlimited Life from a reliable source. You can use this link to download the file directly to your device.
-
Step 2: Enable unknown sources
-
The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 3: Install the APK file
-
The third step is to install the APK file on your device. To do this, locate the downloaded file in your file manager and tap on it.
A pop-up window will appear asking you to confirm the installation. Tap on Install and wait for the process to finish.
-
Step 4: Launch the game and enjoy
-
The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the Air Attack icon. You can now play the game with unlimited gold coins, unlimited lives, and no ads.
-
Pros and cons of Air Attack Mod APK
-
Like any other modded game, Air Attack Mod APK Unlimited Life has its pros and cons. Here are some of them:
-
Pros
-
Fun and addictive gameplay
-
Air Attack Mod APK Unlimited Life offers a fun and addictive gameplay that will keep you hooked for hours. You can fly a fighter plane and shoot down enemy aircrafts, tanks, ships, and buildings. You can also play in different modes, such as campaign, survival, and multiplayer. You can challenge yourself with different levels of difficulty and missions.
-
Stunning graphics and sound effects
-
Air Attack Mod APK Unlimited Life has stunning graphics and sound effects that will make you feel like you are in a real war zone. You can enjoy the realistic 3D environments, animations, and explosions. You can also hear the roaring of the engines, the firing of the guns, and the screams of the enemies.
-
Easy to control and customize
-
Air Attack Mod APK Unlimited Life is easy to control and customize. You can use the touch screen or the accelerometer to control your plane. You can also adjust the sensitivity and the sound settings according to your preference. You can also choose from different planes, weapons, upgrades, and power-ups to customize your gameplay.
-
Cons
-
Requires internet connection
-
Air Attack Mod APK Unlimited Life requires an internet connection to play. This means you cannot play it offline or in areas with poor network coverage. This can be a problem if you want to play the game without any interruptions or data charges.
-
May not be compatible with some devices
-
Air Attack Mod APK Unlimited Life may not be compatible with some devices. This means you may encounter some errors or glitches while playing the game on your device. This can be a problem if you want to enjoy the game without any issues or crashes.
-
Conclusion
-
Air Attack Mod APK Unlimited Life is a modified version of the original Air Attack game that gives you unlimited gold coins, unlimited lives, and no ads. It is a thrilling and action-packed arcade game that lets you fly a fighter plane and shoot down enemy aircrafts, tanks, ships, and buildings. It has fun and addictive gameplay, stunning graphics and sound effects, and easy to control and customize features. However, it also requires an internet connection to play and may not be compatible with some devices. If you want to try this modded game, you can download it from this link and follow the steps we have provided above.
-
FAQs
-
Here are some frequently asked questions about Air Attack Mod APK Unlimited Life:
-
-
Is Air Attack Mod APK Unlimited Life safe to download?
-
Yes, Air Attack Mod APK Unlimited Life is safe to download as long as you use a reliable source like this link. However, you should always be careful when downloading any modded game from unknown sources as they may contain viruses or malware that can harm your device.
-
Is Air Attack Mod APK Unlimited Life legal to use?
-
No, Air Attack Mod APK Unlimited Life is not legal to use as it violates the terms and conditions of the original game developer. By using this modded game, you are breaking the rules and risking your account being banned or suspended. Therefore, we do not recommend using this modded game for any purposes.
-
Can I play Air Attack Mod APK Unlimited Life with my friends?
-
Yes, you can play Air Attack Mod APK Unlimited Life with your friends online. You can join or create a room in multiplayer mode and invite your friends to join you. You can also chat with them in real-time and compete with them in different missions.
-
Can I update Air Attack Mod APK Unlimited Life?
-
No, you cannot update Air Attack Mod APK Unlimited Life as it is a modded version of the original game. If you try to update it from the Google Play Store or any other source, you will lose all the modded features and revert back to the original version. Therefore, we advise you not to update this modded game unless there is a new version of the modded game available from the same source.
-
What are some alternatives to Air Attack Mod APK Unlimited Life?
-
If you are looking for some alternatives to Air Attack Mod APK Unlimited Life, you can try these games:
-
-
| Name | Description |
| --- | --- |
| Sky Force Reloaded | A classic arcade shooter game that lets you fly a fighter plane and blast your enemies with various weapons and power-ups. You can also upgrade your plane and collect cards and achievements. |
| Galaxy Attack: Alien Shooter | A space shooter game that lets you fly a spaceship and defend the Earth from alien invaders. You can also upgrade your spaceship and weapons and play in different modes and levels. |
| 1945 Air Force | A retro-style arcade shooter game that lets you fly a warplane and fight against the Axis forces in World War II. You can also collect and upgrade different planes and weapons and play in different modes and missions. |
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md b/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md
deleted file mode 100644
index 98cfd417853cb27125cba2f7751acc5759b68f3a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md
+++ /dev/null
@@ -1,181 +0,0 @@
-
-
Download Cars Games: How to Find and Play the Best Racing Games on Your PC or Mobile Device
-
Do you love speed, adrenaline, and competition? Do you enjoy driving fast cars, trucks, motorcycles, or even futuristic vehicles? If you answered yes, then you might be interested in cars games. Cars games are video games that involve racing against other players or the computer on various tracks, roads, or terrains. Cars games are one of the most popular genres of video games, as they appeal to a wide range of audiences, from casual gamers to hardcore enthusiasts. Whether you want to experience realistic driving physics, colorful graphics, or fun gameplay mechanics, there is a cars game for you.
-
In this article, we will guide you through the different types of cars games, the platforms you can play them on, how to download them, how to play them, and what are some of the best cars games to try out. By the end of this article, you will be ready to rev up your engines and hit the asphalt.
Cars games can be classified into three main subgenres: sim racing, kart racing, and futuristic racing. Each subgenre has its own characteristics, advantages, and disadvantages. Let's take a look at each one.
-
Sim Racing
-
Sim racing stands for simulation racing. This subgenre aims to provide a realistic representation of driving and racing. Sim racing games feature licensed cars and motorbikes from real manufacturers, authentic tracks and locations from around the world, accurate physics and handling models, and detailed graphics and sound effects. Sim racing games are ideal for players who want to immerse themselves in the thrill of racing and learn how to control different vehicles in various conditions. Some examples of sim racing games are F1 23, Gran Turismo Sport, and Forza Motorsport 7.
-
Kart Racing
-
Kart racing is a subgenre that focuses on arcade-style racing with power-ups and weapons. Kart racing games feature cartoonish graphics, exaggerated physics, and humorous elements. Kart racing games are suitable for players who want to have fun and enjoy casual gameplay with friends or family. Some examples of kart racing games are Mario Kart 8 Deluxe, Crash Team Racing Nitro-Fueled, and Sonic & All-Stars Racing Transformed.
-
Futuristic Racing
-
Futuristic racing is a subgenre that involves racing in sci-fi settings with advanced vehicles. Futuristic racing games feature high-speed action, stunning visuals, and innovative gameplay mechanics. Futuristic racing games are perfect for players who want to explore new worlds and experience exhilarating sensations. Some examples of futuristic racing games are Wipeout Omega Collection, Redout, and F-Zero GX.
-
Platforms for Cars Games
-
Cars games can be played on various platforms, such as PC (personal computer), mobile (smartphone or tablet), console (PlayStation, Xbox, Nintendo), or VR (virtual reality). Each platform has its own advantages and disadvantages when it comes to playing cars games. Let's compare two of the most common platforms: PC and mobile.
-
PC
-
PC is a platform that allows you to play cars games on your computer, either with a keyboard, mouse, or controller. PC has some advantages over other platforms, such as:
-
-
Higher performance: PC can run cars games at higher resolutions, frame rates, and graphics settings, resulting in smoother and sharper gameplay.
-
More customization: PC can let you adjust various options and settings to suit your preferences and needs, such as controls, audio, video, and difficulty.
-
More variety: PC can offer you access to a wider range of cars games, from indie titles to AAA games, from old classics to new releases.
-
-
However, PC also has some disadvantages, such as:
-
-
Higher cost: PC can require you to invest more money in buying or upgrading your hardware and software components, such as CPU, GPU, RAM, storage, operating system, drivers, and antivirus.
-
More complexity: PC can involve more steps and challenges in installing, launching, and running cars games, such as compatibility issues, bugs, crashes, and errors.
-
Less portability: PC can limit you to playing cars games in a fixed location, such as your home or office, unless you have a laptop or a gaming laptop.
-
-
Mobile
-
Mobile is a platform that enables you to play cars games on your smartphone or tablet, either with touch screen or tilt controls. Mobile has some advantages over other platforms, such as:
-
download asphalt 8 car racing game
-download gt racing 2 real car game
-download real moto 2 motorcycle game
-download carx drift racing game
-download drift legends real car racing game
-download racing games for pc from epic games store
-download racing games for android from google play
-download free online racing games for pc
-download multiplayer racing games for pc
-download racing simulators for pc
-download kart racing games for pc
-download futuristic racing games for pc
-download track racing games for pc
-download street racing games for pc
-download off-road racing games for pc
-download licensed cars and motorbikes games for pc
-download action-packed races games for pc
-download 75+ tracks racing games for pc
-download single and multiplayer racing modes games for pc
-download different seasons and live events racing games for pc
-download limited-time cups and prizes racing games for pc
-download world series and racing events games for pc
-download customizable racer avatar games for pc
-download airborne and stunt racing games for pc
-download control customization racing games for pc
-download retro racers 2 game for pc
-download f1 23 game for pc
-download madcar f1 multiplayer game for pc
-download cyber drift game for pc
-download moto game for pc
-download fpvsim fpv simulator game for pc
-download tuk tuk race game for pc
-download futuregrind game for pc
-download lego 2k drive game for pc
-download flyto game for pc
-download bus driver simulator game for pc
-download gpro classic racing manager game for pc
-download duck life 8 adventure game for pc
-download forklift extreme deluxe edition game for pc
-download wreckfest game for pc
-download mashed game for pc
-download nhra championship drag racing speed for all game for pc
-download space haste 2 game for pc
-download need for speed unbound standard edition game for pc
-download dakar desert rally game for pc
-download the crew standard edition game for pc
-download wrc generations game for pc
-download funtasia game for pc
-download midnight legends game for pc
-
-
Lower cost: Mobile can allow you to play cars games for free or for a low price, as most of them are available on app stores or websites.
-
Less complexity: Mobile can make it easier for you to download, install, and play cars games, as most of them are designed to be user-friendly and compatible with your device.
-
More portability: Mobile can let you play cars games anywhere and anytime, as long as you have your device and an internet connection.
-
-
However, mobile also has some disadvantages, such as:
-
-
Lower performance: Mobile can run cars games at lower resolutions, frame rates, and graphics settings, resulting in less smooth and less detailed gameplay.
-
Less customization: Mobile can offer you fewer options and settings to modify your gaming experience, such as controls, audio, video, and difficulty.
-
Less variety: Mobile can provide you with a smaller selection of cars games, as most of them are casual or simplified versions of PC or console games.
-
-
How to Download Cars Games
-
Depending on the platform you choose to play cars games on, there are different ways to download them. Here are the steps for downloading cars games on PC and mobile.
-
PC
-
To download cars games on PC, you have two main options: online stores or websites. Online stores are platforms that sell digital copies of cars games that you can buy and download directly to your computer. Some examples of online stores are Steam, Epic Games Store, and GOG.com. Websites are platforms that offer free or paid downloads of cars games that you can get from various sources. Some examples of websites are GameTop, My Real Games, and Softonic. Here are the steps for downloading cars games from online stores or websites:
-
-
Browse the online store or website of your choice and look for the cars game you want to download.
-
Check the system requirements and the price of the cars game before downloading it.
-
Create an account or log in to the online store or website if needed.
-
Add the cars game to your cart or library and proceed to checkout or payment if required.
-
Download the cars game installer or launcher to your computer and run it.
-
Follow the instructions on the screen to install or launch the cars game on your computer.
-
Enjoy playing the cars game on your PC.
-
-
Mobile
-
To download cars games on mobile, you have two main options: app stores or websites. App stores are platforms that sell or distribute apps of cars games that you can download directly to your smartphone or tablet. Some examples of app stores are Google Play Store, Apple App Store, and Amazon Appstore. Websites are platforms that offer free or paid downloads of apps or APK files of cars games that you can get from various sources. Some examples of websites are APKPure, APKMirror, and Uptodown. Here are the steps for downloading cars games from app stores or websites:
-
-
Browse the app store or website of your choice and look for the cars game you want to download.
-
Check the ratings, reviews, and permissions of the cars game before downloading it.
-
Tap on the download or install button to start downloading the cars game to your device.
-
Wait for the download to finish and open the cars game app or APK file on your device.
-
Follow the instructions on the screen to install or launch the cars game on your device.
-
Enjoy playing the cars game on your mobile.
-
-
How to Play Cars Games
-
Once you have downloaded and installed the cars game of your choice, you can start playing it. However, playing cars games can be challenging or frustrating if you don't know how to control your vehicle or how to win races. Here are some tips and tricks for playing cars games on PC and mobile.
-
PC
-
To play cars games on PC, you can use a keyboard, mouse, or controller. Each input device has its own advantages and disadvantages, so you should choose the one that suits your style and comfort. Here are some tips and tricks for playing cars games with a keyboard, mouse, or controller:
-
-
Keyboard: A keyboard is a common input device that allows you to use different keys to steer, accelerate, brake, and use other functions in cars games. A keyboard is easy to use and accessible, but it can be less precise and responsive than a mouse or controller. To play cars games with a keyboard, you should learn the default key bindings or customize them to your liking. You should also practice using the arrow keys or WASD keys to control your vehicle smoothly and accurately.
-
Mouse: A mouse is an input device that allows you to use a cursor and buttons to interact with cars games. A mouse is more precise and responsive than a keyboard, but it can be less comfortable and intuitive than a controller. To play cars games with a mouse, you should adjust the sensitivity and acceleration settings to your preference. You should also practice using the left and right buttons to steer, accelerate, brake, and use other functions in cars games.
-
Controller: A controller is an input device that allows you to use analog sticks, triggers, buttons, and other features to play cars games. A controller is more comfortable and intuitive than a keyboard or mouse, but it can be more expensive and require additional software or drivers. To play cars games with a controller, you should connect it to your PC via USB or Bluetooth. You should also learn the default button mappings or customize them to your liking. You should also practice using the analog sticks, triggers, buttons, and other features to control your vehicle smoothly and accurately.
-
-
Mobile
-
To play cars games on mobile, you can use touch screen or tilt controls. Touch screen controls allow you to tap, swipe, drag, and pinch on your device's screen to play cars games. Tilt controls allow you to tilt your device left or right to steer your vehicle in cars games. Both types of controls have their pros and cons, so you should choose the one that suits your style and comfort. Here are some tips and tricks for playing cars games with touch screen or tilt controls:
-
-
Touch screen: Touch screen controls are easy to use and accessible, but they can block your view of the game or cause accidental inputs. To play cars games with touch screen controls, you should adjust the size and position of the buttons or icons on your screen. You should also practice tapping, swiping, dragging, and pinching on your screen to steer, accelerate, brake, and use other functions in cars games.
-
Tilt: Tilt controls are more immersive and realistic than touch screen controls, but they can be less precise and stable than touch screen controls. To play cars games with tilt controls, you should calibrate your device's accelerometer before starting the game. You should also practice tilting your device left or right to steer your vehicle in cars games.
-
-
Best Cars Games to Download and Play
-
Now that you know how to download and play cars games on PC and mobile, you might be wondering which ones are the best to try out. There are hundreds of cars games available on different platforms, but not all of them are worth your time and money. To help you choose, we have compiled a table with some of the most popular cars games for PC and mobile. The table includes the name, rating, features, and links for each game.

| Name | Rating | Features | Links |
| --- | --- | --- | --- |
| Forza Horizon 5 | 9.1/10 | Open-world racing in Mexico with dynamic seasons and weather - Hundreds of cars to customize and drive - Various modes and events to participate in solo or online - Stunning graphics and sound effects | [PC](^1^), [Xbox](https://www.xbox.com/en-US/games/forza-horizon-5) |
| Dirt 5 | 7.9/10 | Off-road racing on various terrains and locations - Over 70 vehicles to choose from - Career mode with voice acting and story - Online and split-screen multiplayer modes - Playgrounds mode to create and share custom tracks | [PC](^2^), [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA16194_00-DIRT5FULLGAME000), [PS5](https://store.playstation.com/en-us/product/UP4001-PPSA01521_00-DIRT5FULLGAME000), [Xbox One](https://www.microsoft.com/en-us/p/dirt-5/9n0wzv3qzq0c?activetab=pivot:overviewtab), [Xbox Series X/S](https://www.microsoft.com/en-us/p/dirt-5-xbox-series-xs/9n0wzv3qzq0c?activetab=pivot:overviewtab) |
| F1 2022 | 8.6/10 | Official Formula One racing game with licensed teams, drivers, and circuits - Realistic simulation of driving and racing physics - Career mode with story and customization - Online mode with ranked and unranked races - Braking Point mode to experience a narrative-driven story | [PC](^3^), [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA26732_00-F120210000000000), [PS5](https://store.playstation.com/en-us/product/UP4001-PPSA02947_00-F120210000000000), [Xbox One](https://www.microsoft.com/en-us/p/f1-2021-xbox-one/9p6xhjgkxkxh?activetab=pivot:overviewtab), [Xbox Series X/S](https://www.microsoft.com/en-us/p/f1-2021-xbox-series-xs/9p6xhjgkxkxh?activetab=pivot:overviewtab) |
| NFS Heat | 7.2/10 | Street racing in a fictional open-world city - Day and night cycle with different modes and rewards - Over 120 cars to customize and upgrade - Online mode with up to 16 players - Cop chases and pursuits | [PC], [PS4](https://store.playstation.com/en-us/product/UP0006-CUSA15090_00-NFS2000MASTER000), [Xbox One](https://www.microsoft.com/en-us/p/need-for-speed-heat/9p8q2k21b6vg?activetab=pivot:overviewtab) |
| Project CARS 3 | 6.8/10 | Racing game with over 200 cars and 140 tracks - Career mode with progression and customization - Dynamic weather and time of day effects - Online mode with multiplayer races and challenges - VR support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP0700-CUSA19665_00-PJC3BASEGAMEUS00), [Xbox One](https://www.microsoft.com/en-us/p/project-cars-3/9n7l8fjv7l8s?activetab=pivot:overviewtab) |
| Dirt Rally 2.0 | 8.4/10 | Rally racing game with realistic physics and handling - Over 50 cars and 100 stages across six locations - Career mode with team management and upgrades - Online mode with daily, weekly, and monthly challenges - VR support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA12819_00-DIRTRALLY2US0000), [Xbox One](https://www.microsoft.com/en-us/p/dirt-rally-20/c2wqnrj46mrv?activetab=pivot:overviewtab) |
| Assetto Corsa Competizione | 7.7/10 | Official GT World Challenge racing game with licensed cars, teams, and tracks - Advanced simulation of driving and racing mechanics - Dynamic weather and day-night cycle - Single-player and multiplayer modes - VR and triple screen support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP4040-CUSA17346_00-ACCOMPETIZIONE00), [Xbox One](https://www.microsoft.com/en-us/p/assetto-corsa-competizione/9n5xqzg0q2xv?activetab=pivot:overviewtab) |
| Asphalt 9: Legends | 4.6/5 | Arcade-style racing game with over 60 cars and 80 tracks - Career mode with hundreds of events and challenges - Online mode with multiplayer races and clubs - Customization and upgrade system - Touch drive or manual controls | [Android], [iOS](https://apps.apple.com/us/app/asphalt-9-legends/id805603214), [Windows](https://www.microsoft.com/en-us/p/asphalt-9-legends/9nzqpt0mwtd0?activetab=pivot:overviewtab) |
| Real Racing 3 | 4.4/5 | Realistic racing game with over 250 cars and 40 tracks - Career mode with thousands of events and cups - Online mode with real-time multiplayer races and leaderboards - Time-shifted multiplayer mode to race against friends or strangers - Customization and upgrade system | [Android], [iOS](https://apps.apple.com/us/app/real-racing-3/id556164008) |
| CSR Racing 2 | 4.6/5 | Drag racing game with over 200 cars and various locations - Career mode with story and crew battles - Online mode with live races and events - Customization and tuning system - AR mode to view cars in real life | [Android], [iOS](https://apps.apple.com/us/app/csr-racing-2/id887947640) |
| Hill Climb Racing 2 | 4.3/5 | Physics-based racing game with various vehicles and terrains - Adventure mode with endless levels and challenges - Online mode with cups and leagues - Customization and upgrade system - Funny graphics and sound effects | [Android], [iOS](https://apps.apple.com/us/app/hill-climb-racing-2/id1146465836) |
| Traffic Rider | 4.4/5 | Motorcycle racing game with over 30 bikes and various roads - Career mode with over 70 missions and achievements - Endless mode with different modes and objectives - First-person view and realistic sound effects - Day-night cycle and weather effects | [Android], [iOS](https://apps.apple.com/us/app/traffic-rider/id951744068) |
Conclusion
-
Cars games are video games in which you race against other players or the computer on various tracks, roads, and terrains. They are one of the most popular genres of video games because they appeal to a wide range of audiences, from casual gamers to hardcore enthusiasts. Whether you want realistic driving physics, colorful graphics, or fun gameplay mechanics, there is a cars game for you.
-
In this article, we have shown you the different types of cars games, the platforms you can play them on, how to download them, how to play them, and some of the best cars games to try out. We hope that this article has helped you find and play the best racing games on your PC or mobile device.
-
If you want to download any of these games, use the links in the table above. If you want to get better at them, use the tips and tricks in this article. And if you just want to have some fun, you can start playing right now.
-
So what are you waiting for? Download cars games today and start your engines!
-
FAQs
-
Here are some of the frequently asked questions about cars games:
-
-
What are the benefits of playing cars games?
-
Playing cars games can have various benefits, such as:
-
-
Improving your hand-eye coordination, reaction time, and spatial awareness.
-
Enhancing your creativity, problem-solving, and decision-making skills.
-
Reducing your stress, boredom, and anxiety levels.
-
Increasing your enjoyment, satisfaction, and confidence.
-
-
What are the drawbacks of playing cars games?
-
Playing cars games can also have some drawbacks, such as:
-
-
Spending too much time or money on cars games, which can affect your health, productivity, and finances.
-
Exposing yourself to violent, inappropriate, or addictive content, which can affect your mood, behavior, and values.
-
Experiencing technical issues, such as lag, glitches, or errors, which can affect your gaming experience and performance.
-
Facing online risks, such as cyberbullying, hacking, or phishing, which can affect your privacy and security.
-
-
How to choose the best cars game for me?
-
To choose the best cars game for you, you should consider the following factors:
-
-
Your preference: You should choose a cars game that matches your taste and interest, such as the subgenre, the theme, the style, and the features.
-
Your platform: You should choose a cars game that is compatible with your device and input method, such as the system requirements, the controls, and the graphics.
-
Your budget: You should choose a cars game that fits your budget and expectations, such as the price, the quality, and the value.
-
Your feedback: You should choose a cars game that has positive feedback and reviews from other players and critics, such as the ratings, the comments, and the awards.
-
-
How to improve my skills in cars games?
-
To improve your skills in cars games, you should follow these tips:
-
-
Practice: You should play cars games regularly and frequently to improve your muscle memory, reflexes, and strategies.
-
Learn: You should watch tutorials, guides, and videos from experts and professionals to learn new techniques, tips, and tricks.
-
Challenge: You should try different modes, levels, and opponents to challenge yourself and test your skills.
-
Analyze: You should review your performance and mistakes to identify your strengths and weaknesses.
-
Enjoy: You should have fun and enjoy playing cars games without stressing too much about winning or losing.
-
-
How to find more information about cars games?
-
To find more information about cars games, you can use these resources:
-
-
Websites: You can visit websites that specialize in cars games or video games in general, such as IGN, GameSpot, or PC Gamer.
-
Blogs: You can read blogs that cover cars games or video games in general, such as Kotaku, Polygon, or Rock Paper Shotgun.
-
Podcasts: You can listen to podcasts that discuss cars games or video games in general, such as The Giant Bombcast, The Game Informer Show, or The PC Gaming Show.
-
Forums: You can join forums that are dedicated to cars games or video games in general, such as Reddit, Steam, or Discord.
-
Social media: You can follow social media accounts that share news and updates about cars games or video games in general, such as Twitter, Facebook, or Instagram.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md b/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md
deleted file mode 100644
index 8c797dc85158fdecaaee1ed9ea08b2453fa86338..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-
How to Download and Play Lep's World on Your PC
-
If you are a fan of classic platform games like Super Mario, you might want to try Lep's World, a popular game that has over 250 million downloads. Lep's World is a fun and challenging game that follows the adventures of Lep, a leprechaun who has to find his gold and rescue his friends from an evil wizard. In this article, we will show you how to download and play Lep's World on your PC, so you can enjoy this game on a bigger screen and with better controls.
Lep's World is a platform game inspired by the legendary Super Mario series. The gameplay is similar: you run, jump, collect coins, avoid enemies, and reach the end of each level. The game also has its own twists, such as throwing acorns at enemies, collecting clover leaves to increase your health, and using different items and abilities to overcome obstacles.
-
Features and gameplay
-
Lep's World has many features that make it an enjoyable and addictive game. Some of these features are:
-
-
160 well-designed levels across 8 different worlds
-
8 amazing characters to choose from, each with their own skills and costumes
-
9 challenging enemies and boss fights
-
Beautiful graphics and animations
-
Catchy music and sound effects
-
Achievements and leaderboards
-
Multiplayer mode
-
Frequent updates with new content
-
-
Why play Lep's World on PC?
-
Bigger screen and better graphics
-
One of the main reasons to play Lep's World on PC is that you can enjoy the game on a bigger screen and with better graphics. The game has colorful and detailed graphics that look great on a PC monitor. You can also adjust the resolution and quality settings to suit your preferences.
-
Easier controls and smoother performance
-
Another reason to play Lep's World on PC is that you can use easier controls and experience smoother performance. The game has simple controls that only require four buttons: left, right, jump, and throw. You can use your keyboard or a controller to play the game on your PC. You can also customize the key mapping to your liking. Moreover, playing on PC can reduce lag and glitches that might occur on mobile devices.
-
-
How to download Lep's World on PC?
-
Option 1: Microsoft Store
-
Steps to download from Microsoft Store
-
-
Open the Microsoft Store app on your PC. You can find it by searching for it in the Windows search bar.
-
Click on Gaming in the sidebar.
-
Type in Lep's World in the search box and press Enter.
-
Select Lep's World from the results and click on Get or Buy, depending on whether the game is free or paid.
-
Wait for the game to download and install on your PC.
-
Launch the game from the Microsoft Store app or from your Start menu (a command-line alternative using winget is sketched after these steps).
-
-
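If you are comfortable with the command line, recent versions of Windows also ship with the winget package manager, which can search the Microsoft Store catalog. The sketch below wraps winget in a short Python script; whether Lep's World is actually listed in the Store catalog under that exact name is an assumption, so treat the search term as illustrative rather than a confirmed package ID.

```python
import subprocess

# Search the Microsoft Store catalog for the game. winget prints any matches,
# or a "No package found" message if there are none.
subprocess.run(
    ["winget", "search", "--source", "msstore", "Lep's World"],
    check=False,
)

# If a match appears, it could then be installed by the Id shown in the results:
# subprocess.run(["winget", "install", "--source", "msstore", "--id", "<Id from search>"])
```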
Option 2: Direct download from official website
-
Steps to download from official website
-
-
Go to the [Lep's World official website] using your web browser.
-
Click on the Download for Windows button.
-
Save the downloaded installer file, then run it and follow the on-screen instructions to finish installing Lep's World on your PC.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md b/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md
deleted file mode 100644
index db0dea1ac5cbc2cd1f1094393a531749eddacf0d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md
+++ /dev/null
@@ -1,171 +0,0 @@
-
-
Pokemon Go APK: Everything You Need to Know
-
Pokemon Go is one of the most popular mobile games in the world, with over a billion downloads and millions of active players. It is an augmented reality game that lets you explore the real world and catch virtual creatures called Pokemon. You can also battle other players, team up with friends, trade Pokemon, and more.
If you are an Android user, you might be wondering how to download and install Pokemon Go on your device. One way to do that is by using an apk file, which is a package file that contains all the necessary data for an app. In this article, we will show you how to get the Pokemon Go apk file, what features it offers, some tips and tricks for playing the game, and how to keep it updated and compatible with your device.
-
What is an APK File and How to Install It?
-
An apk file is a compressed file that contains all the code, resources, assets, certificates, and manifest of an Android app. It is similar to an exe file for Windows or a dmg file for Mac. You can use an apk file to install an app on your Android device without using the Google Play Store.
-
To install an apk file, you need to enable the option to allow installation from unknown sources in your device settings. This will let you install apps from sources other than the Google Play Store. However, you should be careful when downloading apk files from third-party websites, as they might contain malware or viruses that can harm your device or steal your data.
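One practical precaution before installing: if the site you download from publishes a checksum for the file, you can verify it yourself before sideloading. The Python sketch below is a minimal illustration of that idea, not part of Pokemon Go or any official tool; the file name and the expected checksum are placeholders, and the optional adb install step assumes you have adb on your computer and USB debugging enabled on the phone.

```python
import hashlib
import subprocess

APK_PATH = "pokemon-go.apk"                            # placeholder file name
EXPECTED_SHA256 = "paste-the-published-checksum-here"  # placeholder value

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
if actual == EXPECTED_SHA256:
    # Sideload the verified file onto a USB-connected device with adb.
    subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
else:
    print(f"Checksum mismatch ({actual}); do not install this file.")
```

A matching checksum only proves the file was not corrupted or swapped in transit; it says nothing about whether the publisher's copy is trustworthy, so the advice about sticking to reputable sources still applies.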
-
-
One of the trusted sources for downloading apk files is APKCombo.com, which offers free and safe downloads of various Android apps and games. You can search for Pokemon Go on their website and download the latest version of the apk file. Then, you can open the file on your device and follow the instructions to install it. You might need to grant some permissions to the app, such as access to your location, camera, storage, etc.
-
Features of Pokemon Go
-
Pokemon Go is a game that combines the fun of catching Pokemon with the thrill of exploring the real world. Here are some of the features that make it so addictive:
-
Catching, Collecting, and Evolving Pokemon
-
The main goal of Pokemon Go is to catch as many different kinds of Pokemon as you can. You can find them by walking around your neighborhood or visiting different places. When you encounter a Pokemon, you can use your smartphone's touch screen to throw a Poke Ball at it and try to catch it. Some Pokemon are easier to catch than others, depending on their rarity, level, type, etc.
-
Once you catch a Pokemon, you can add it to your collection or transfer it to Professor Willow for some candy. You can use candy and stardust to power up your Pokemon and make them stronger. You can also use candy to evolve your Pokemon into new forms, such as evolving Charmander into Charmeleon or Eevee into Vaporeon.
-
You can check your Pokedex to see how many different kinds of Pokemon you have caught and how many more you need to complete it. There are over 600 species of Pokemon available in Pokemon Go, including some regional exclusives that can only be found in certain parts of the world. You can also encounter some special Pokemon, such as shiny, shadow, or costume Pokemon, that have different appearances or abilities.
-
Battling Other Trainers and Raiding Gyms
-
Pokemon Go is not just about catching Pokemon, but also about fighting with them. You can challenge other players to friendly battles or compete in ranked battles to earn rewards and glory. You can use up to three Pokemon in a battle and switch between them as needed. You can also use charged attacks and shields to gain an advantage over your opponent.
-
Another way to battle in Pokemon Go is by raiding gyms. Gyms are locations where you can join forces with other players to defeat a powerful Pokemon called a raid boss. Raid bosses can range from one to five stars in difficulty, and some of them are legendary or mythical Pokemon that are very rare and hard to catch. You can use a raid pass to join a raid, and you can invite up to five friends to join you. If you manage to defeat the raid boss within the time limit, you will get a chance to catch it and earn some rewards, such as rare candy, golden razz berries, or TMs.
-
Making Friends, Exchanging Gifts, and Trading Pokemon
-
Pokemon Go is also a social game that lets you interact with other players around the world. You can add other players as friends by using their trainer codes or scanning their QR codes. You can also join local or global communities of Pokemon Go players through platforms like Discord or Facebook.
-
One of the benefits of having friends in Pokemon Go is that you can exchange gifts with them. Gifts are items that you can get from spinning PokeStops or gyms, and they can contain useful items like Poke Balls, potions, revives, eggs, or stickers. You can send one gift per day to each of your friends, and you can open up to 20 gifts per day from your friends. Sending and opening gifts will increase your friendship level with your friends, which will give you some bonuses, such as extra damage in raids, extra balls in catching, or reduced stardust cost in trading.
-
Trading is another feature that you can enjoy with your friends in Pokemon Go. Trading is the process of exchanging one Pokemon for another with another player. You can trade any Pokemon that you have caught or hatched, except for some special ones like mythical Pokemon or your buddy Pokemon. Trading will cost some stardust, depending on the rarity and distance of the traded Pokemon. Trading will also change the IVs and CP of the traded Pokemon, which might make them better or worse than before. However, trading might also result in a lucky trade, which will guarantee high IVs and reduced stardust cost for powering up the traded Pokemon.
-
Tips and Tricks for Pokemon Go
-
Pokemon Go is a game that requires some strategy and skill to master. Here are some tips and tricks that will help you become a better trainer:
-
How to Find and Catch Rare Pokemon
-
Finding and catching rare Pokemon is one of the most exciting aspects of Pokemon Go. However, it is not always easy to do so, as rare Pokemon tend to spawn less frequently and flee more easily than common ones. Here are some ways to increase your chances of finding and catching rare Pokemon:
-
-
Use incense or lures to attract more Pokemon to your location. Incense will spawn one Pokemon every minute for 30 minutes, while lures will spawn one Pokemon every three minutes for 30 minutes at a specific PokeStop or gym. You can also use special incense or lures that will attract specific types of Pokemon.
-
Use the nearby or sightings feature to track down nearby Pokemon. The nearby feature will show you the Pokemon that are near PokeStops or gyms, while the sightings feature will show you the Pokemon that are in the wild. You can tap on a Pokemon to see its location on the map and follow the footsteps to find it.
-
Use the weather system to find weather-boosted Pokemon. The weather system will change the weather conditions in the game according to the real-world weather in your area. Different types of Pokemon will spawn more frequently and be stronger in different weather conditions. For example, fire-type Pokemon will spawn more often and have higher CP in sunny weather, while water-type Pokemon will spawn more often and have higher CP in rainy weather.
-
Use field research tasks or special research quests to encounter rare Pokemon. Field research tasks are missions that you can get from spinning PokeStops or gyms, and they will reward you with items or encounters with certain Pokemon after completing them. Special research quests are story-based missions that you can get from Professor Willow or other characters, and they will reward you with items or encounters with some rare or legendary Pokemon after completing them.
-
Use the adventure sync feature to hatch eggs and get rare Pokemon. The adventure sync feature will track your steps and distance even when the app is closed, and it will count towards hatching eggs that you can get from spinning PokeStops or gyms. Eggs can contain different kinds of Pokemon, depending on their distance and rarity. For example, 2 km eggs can contain common Pokemon like Pidgey or Rattata, while 10 km eggs can contain rare Pokemon like Dratini or Beldum.
-
-
To catch rare Pokemon, you need to use some strategies and items to increase your catch rate. Here are some tips to catch rare Pokemon:
-
-
Use the right type of Poke Ball for the situation. There are different types of Poke Balls that have different catch rates and effects. For example, a Great Ball has a higher catch rate than a regular Poke Ball, while an Ultra Ball has an even higher catch rate. You can also use special balls like a Premier Ball, which has a higher catch rate in raids, or a Quick Ball, which has a higher catch rate at the start of an encounter.
-
Use curveballs and throw bonuses to improve your accuracy and catch rate. A curveball is when you spin the Poke Ball before throwing it, which will make it curve in the air and hit the Pokemon from the side. A throw bonus is when you hit the Pokemon inside the colored circle that appears around it, which will shrink and change color as you hold the Poke Ball. The smaller and darker the circle, the higher the throw bonus. A curveball and a throw bonus will increase your catch rate and give you extra XP.
-
Use berries to make catching easier. Berries are items that you can feed to a Pokemon before throwing a Poke Ball at it, and they will have different effects on the Pokemon. For example, a Razz Berry will make the Pokemon easier to catch, while a Nanab Berry will make the Pokemon less likely to move or attack. You can also use special berries like a Pinap Berry, which will double the candy you get from catching the Pokemon, or a Golden Razz Berry, which will greatly increase the catch rate.
-
-
How to Use Items and Berries Effectively
-
Items and berries are essential tools that will help you in your Pokemon Go adventure. You can get them from spinning PokeStops or gyms, completing tasks or quests, opening gifts, or buying them from the shop. Here are some tips on how to use items and berries effectively:
-
-
Use potions and revives to heal your Pokemon after battles or raids. Potions are items that restore some HP to your Pokemon, while revives are items that restore some HP and revive your fainted Pokemon. There are different levels of potions and revives that restore different amounts of HP, such as Super Potion, Hyper Potion, Max Potion, Revive, and Max Revive.
-
Use incense or lures to attract more Pokemon to your location. Incense will spawn one Pokemon every minute for 30 minutes, while lures will spawn one Pokemon every three minutes for 30 minutes at a specific PokeStop or gym. You can also use special incense or lures that will attract specific types of Pokemon.
-
Use lucky eggs or star pieces to boost your XP or stardust gain. Lucky eggs are items that double your XP gain for 30 minutes, while star pieces are items that increase your stardust gain by 50% for 30 minutes. You can use them before doing activities that give you a lot of XP or stardust, such as catching Pokemon, hatching eggs, evolving Pokemon, completing tasks or quests, battling or raiding, etc.
-
Use TMs to change your Pokemon's moves. TMs are items that let you change one of your Pokemon's moves to a random one of the same type. There are two types of TMs: fast TMs and charged TMs. Fast TMs will change your Pokemon's fast move, which is the move that you use by tapping on the screen during a battle. Charged TMs will change your Pokemon's charged move, which is the move that you use by holding on the screen during a battle after filling up the energy bar.
-
Use rare candy to power up or evolve any Pokemon. Rare candy is an item that can be converted into candy for any Pokemon species. You can use it to power up or evolve any Pokemon that you want, especially those that are hard to find or catch.
-
-
How to Win Battles and Raids
-
Battles and raids are challenging and rewarding activities that will test your skills as a trainer. You can battle other players in friendly battles or ranked battles, or team up with other players to defeat a powerful Pokemon in a raid. Here are some tips on how to win battles and raids:
-
-
Choose the right Pokemon for the battle or raid. The most important factor in winning a battle or raid is choosing the right Pokemon for the situation. You should consider the type, level, CP, IV, moves, and abilities of your Pokemon, as well as the type, level, CP, moves, and abilities of your opponent's Pokemon. You should also consider the weather, which can boost or weaken certain types of Pokemon and moves.
-
Type is the most basic and crucial element of Pokemon battles and raids. Each Pokemon and move has one or two types, such as fire, water, grass, electric, etc. Each type has strengths and weaknesses against other types, which affect the damage dealt or received by a Pokemon or move. For example, fire-type Pokemon and moves are strong against grass-type Pokemon and moves, but weak against water-type Pokemon and moves. You should use type advantages and disadvantages in your favor by choosing Pokemon and moves that are effective against your opponent's Pokemon and moves (a short damage sketch after this list shows how much these multipliers matter).
-
Level, CP, IV, moves, and abilities are other factors that affect the performance of your Pokemon in battles and raids. Level is the measure of how much your Pokemon has grown and trained, and it affects its stats and CP. CP is the measure of how powerful your Pokemon is overall, and it is based on its stats and level. IV is the measure of how good your Pokemon's individual stats are, and it ranges from 0 to 15 for each stat. Moves are the actions that your Pokemon can perform in battles and raids, and they have different types, power, accuracy, energy cost, etc. Abilities are special traits that your Pokemon can have, and they can have various effects on battles and raids.
-
You should choose Pokemon that have high level, CP, IV, moves, and abilities for battles and raids. However, you should also consider the balance and synergy of your team. You should have a diverse team that can handle different types of opponents and situations. You should also have a team that can work well together by supporting each other with buffs, debuffs, heals, etc.
-
Use strategies and tactics to outsmart your opponent in battles and raids. Battles and raids are not just about brute force, but also about skill and strategy. You should use strategies and tactics to gain an edge over your opponent in battles and raids. For example, you can use switch tactics to change your active Pokemon when you have a type disadvantage or when you want to save a Pokemon for later. You can also use shield tactics to block your opponent's charged attacks or bait them into wasting their shields. You can also use energy tactics to manage your energy bar efficiently and unleash powerful charged attacks at the right time.
-
-
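To make the type discussion above concrete, here is a rough per-hit damage sketch. The 1.6x and 0.625x effectiveness multipliers, the 1.2x STAB bonus, and the damage expression are figures commonly reported by the Pokemon Go player community rather than anything published officially, and the example stats are invented, so treat every number here as an assumption; the takeaway is simply that picking good type matchups multiplies your damage.

```python
# Effectiveness multipliers as commonly reported by the player community (assumed).
SUPER_EFFECTIVE = 1.6
NOT_VERY_EFFECTIVE = 0.625
STAB_BONUS = 1.2  # bonus when the move's type matches the attacker's type

def estimated_damage(move_power: float, attacker_attack: float,
                     defender_defense: float, effectiveness: float = 1.0,
                     stab: bool = False) -> int:
    """Rough per-hit damage estimate in the shape used by many fan calculators."""
    multiplier = effectiveness * (STAB_BONUS if stab else 1.0)
    return int(0.5 * move_power * (attacker_attack / defender_defense) * multiplier) + 1

# Invented example: a water-type attacker using a water move against a fire-type defender.
neutral = estimated_damage(55, 200, 150)
matched = estimated_damage(55, 200, 150, effectiveness=SUPER_EFFECTIVE, stab=True)
print(neutral, matched)  # the matched-up hit lands nearly twice as hard
```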
Compatibility and Updates of Pokemon Go
-
Pokemon Go is a game that requires constant updates and compatibility checks to run smoothly on your device. Here are some things you need to know about compatibility and updates of Pokemon Go:
-
What are the Device Requirements and Supported Platforms for Pokemon Go?
-
Pokemon Go is a game that requires a compatible device and platform to play. Here are the minimum device requirements for Android devices:
-
-
Android 6 or above
-
2 GB or more of RAM
-
Access to Google Play services
-
GPS and location services
-
Gyroscope and camera (optional)
-
-
Here are the minimum device requirements for iOS devices:
-
-
iOS 12 or above
-
iPhone 6s or above
-
iPad 5th generation or above
-
iPad mini 4 or above
-
iPad Air 2 or above
-
iPad Pro or above
-
iPod touch 7th generation or above
-
GPS and location services
-
Gyroscope and camera (optional)
-
-
Pokemon Go is supported on the following platforms:
-
-
Android devices that meet the minimum requirements
-
iOS devices that meet the minimum requirements
-
Samsung Galaxy Store devices that meet the minimum requirements
-
Pokemon Go Plus, which is a wearable device that connects to your smartphone via Bluetooth and lets you perform some actions in the game without looking at your screen
-
Pokemon Go Fest 2021 Print at Home Kit, which is a printable kit that lets you create your own immersive experience for the upcoming event
-
-
How to Update Pokemon Go to the Latest Version?
-
Pokemon Go is a game about catching, battling, trading, and raiding Pokemon. It is also a game that takes some tips and tricks to master, such as finding and catching rare Pokemon, using items and berries effectively, and winning battles and raids. And it is a game that needs to stay compatible and up to date to run smoothly on your device; you can check for and download updates from the Google Play Store, the Apple App Store, the Samsung Galaxy Store, or other sources.
-
Pokemon Go is a game that will keep you entertained and engaged for hours, as you explore the world and catch Pokemon. It is a game that will also connect you with other players and communities, as you make friends, exchange gifts, and join forces. It is a game that will also challenge you and reward you, as you complete tasks, quests, events, and achievements. It is a game that will also surprise you and delight you, as you discover new Pokemon, forms, features, and more.
-
If you are a fan of Pokemon or just looking for a fun and immersive game to play on your Android device, you should definitely try Pokemon Go. You can download the apk file from APKCombo.com or other sources and install it on your device. You can also follow the official website, blog, Twitter, Facebook, Instagram, or YouTube of Pokemon Go to stay updated on the latest news and announcements. You can also join the Reddit, Discord, or Facebook communities of Pokemon Go to interact with other players and get tips and support.
-
So what are you waiting for? Grab your smartphone, download the Pokemon Go apk file, and start your adventure today!
-
FAQs
-
Here are some frequently asked questions and answers about Pokemon Go:
-
What are some common problems and solutions for Pokemon Go?
-
Some of the common problems that players might encounter while playing Pokemon Go are:
-
-
The game crashes or freezes: This might be caused by low memory, incompatible device, outdated software, or network issues. You can try to clear the cache, restart the device, update the app or the device software, switch to a different network, or reinstall the app.
-
The GPS signal is not found: This might be caused by poor GPS reception, inaccurate location settings, or interference from other devices. You can try to move to a different location, turn on high accuracy mode in your location settings, turn off Bluetooth or Wi-Fi scanning in your device settings, or recalibrate your compass.
-
The battery drains quickly: This might be caused by high screen brightness, background apps, or other device settings. You can try to lower the screen brightness, close the background apps, turn on battery saver mode in the game settings or the device settings, or use an external battery pack.
-
-
How to get free Pokecoins and other items in Pokemon Go?
-
Pokecoins are the premium currency in Pokemon Go that can be used to buy various items from the shop. There are two ways to get free Pokecoins in Pokemon Go:
-
-
Defend gyms: You can earn up to 50 Pokecoins per day by placing your Pokemon in gyms and defending them from other players. You will get 1 Pokecoin for every 10 minutes that your Pokemon stays in a gym, up to a maximum of 8 hours and 20 minutes per day (the short calculation after this list shows how that works out to the daily cap).
-
Complete tasks: You can earn up to 20 Pokecoins per day by completing certain tasks that are given by Professor Willow or other characters. These tasks can vary from catching Pokemon to spinning PokeStops to battling other players.
-
-
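Because the gym reward is capped, it is easy to misjudge how much extra defending is actually worth. The small sketch below just restates the figures quoted above (1 Pokecoin per 10 minutes of defense, at most 50 per day); the function name and parameters are made up for illustration.

```python
def daily_gym_coins(minutes_defended: int, interval_minutes: int = 10,
                    coins_per_interval: int = 1, daily_cap: int = 50) -> int:
    """Coins earned from gym defense in one day, using the rates quoted above."""
    earned = (minutes_defended // interval_minutes) * coins_per_interval
    return min(earned, daily_cap)

print(daily_gym_coins(8 * 60 + 20))  # 8 h 20 min = 500 min -> exactly the 50-coin cap
print(daily_gym_coins(12 * 60))      # defending longer in one day still yields 50
```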
Other items that you can get for free in Pokemon Go are:
-
-
Poke Balls, potions, revives, eggs, etc.: You can get these items by spinning PokeStops or gyms, opening gifts from friends, completing tasks or quests, participating in events or promotions, etc.
-
Berries, TMs, rare candy, etc.: You can get these items by participating in raids, completing tasks or quests, opening gifts from friends, earning rewards from battles, etc.
-
Stickers, clothing items, avatar poses, etc.: You can get these items by opening gifts from friends, completing tasks or quests, participating in events or promotions, buying them from the shop, etc.
-
-
How to connect Pokemon Go to other Pokemon games and devices?
-
Pokemon Go is a game that can be connected to other Pokemon games and devices to enhance your experience and unlock some benefits. Here are some of the ways to connect Pokemon Go to other Pokemon games and devices:
-
-
Pokemon Home: Pokemon Home is a service that lets you store and manage your Pokemon across different games and platforms. You can connect Pokemon Go to Pokemon Home and transfer your Pokemon from Pokemon Go to Pokemon Home. You can also transfer your Pokemon from Pokemon Home to other compatible games, such as Pokemon Sword and Shield or Pokemon Let's Go Pikachu and Eevee. However, you cannot transfer your Pokemon back from Pokemon Home to Pokemon Go. You can also get some rewards for connecting Pokemon Go to Pokemon Home, such as a Melmetal that can Gigantamax or a Mystery Box that spawns Meltan.
-
Pokemon Let's Go Pikachu and Eevee: Pokemon Let's Go Pikachu and Eevee are games for the Nintendo Switch that are based on the classic Pokemon Yellow game. You can connect Pokemon Go to Pokemon Let's Go Pikachu and Eevee and transfer your Kanto-region Pokemon from Pokemon Go to the games. You can also get some rewards for connecting Pokemon Go to the games, such as a special Pikachu or Eevee that can use exclusive moves or a Mystery Box that spawns Meltan.
-
Pokemon Sword and Shield: Pokemon Sword and Shield are games for the Nintendo Switch that are set in the Galar region. You can connect Pokemon Go to Pokemon Sword and Shield indirectly through Pokemon Home and transfer your Galarian-form Pokemon from Pokemon Go to the games. You can also get some rewards for transferring your Galarian-form Pokemon from Pokemon Go to the games, such as a Galarica Wreath that can evolve Galarian Slowpoke into Galarian Slowking.
-
Pokemon Fit Adventure: Pokemon Fit Adventure is a game for the Nintendo Switch that is designed to help you exercise and stay fit. You can connect Pokemon Go to Pokemon Fit Adventure and sync your steps and distance data from Pokemon Go to the game. You can also get some rewards for syncing your data from Pokemon Go to the game, such as coins, clothing items, or Pokemon encounters.
-
Pokemon Go Plus: Pokemon Go Plus is a wearable device that connects to your smartphone via Bluetooth and lets you perform some actions in the game without looking at your screen. You can use Pokemon Go Plus to catch Pokemon, spin PokeStops or gyms, track your steps and distance, etc. You can also customize the settings and notifications of Pokemon Go Plus through the app.
-
-
-
This is the end of the article on Pokemon Go apk. I hope you enjoyed reading it and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md b/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md
deleted file mode 100644
index d917ff8d8aa2fd125e39ef7de2cedd87f9c9086a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
Stick War 3 APK Mod Download: A Guide for Gamers
-
If you are a fan of strategy games, you might have heard of stick war 3, a popular game that lets you control an army of stick figures in a world where weapons are religion. Stick war 3 is a fun and addictive game that offers both single player and multiplayer modes, as well as a huge campaign with an engaging storyline. But what if you want to make your game experience even better? That's where an apk mod comes in.
-
An apk mod is a modified version of an original app that allows you to access features that are not available in the official version. For example, an apk mod can give you unlimited money, free soldiers, unlocked units, or enhanced graphics. By using an apk mod, you can enjoy the game without any limitations or restrictions.
In this article, we will show you the main features of stick war 3 apk mod download, how to download and install it on your device, and some FAQs that you might have. Let's get started!
-
Main Features of Stick War 3 APK Mod Download
-
Stick war 3 apk mod download offers many features that make the game more exciting and enjoyable. Here are some of them:
-
Real-Time Multiplayer Strategy PVP Matches
-
With stick war 3 apk mod download, you can take control of any unit at any time, team up with your friends, and battle it out in 2v2 matches. You can also challenge other players from around the world and show off your skills and strategies.
-
Custom Armies and Battle Decks
-
Stick war 3 apk mod download allows you to build your own battle decks from a growing selection of army types. You can collect and unlock different units, spells, enchantments, and upgrades, and customize them to suit your playstyle. You can also add generals of each nation, such as Prince Atreyos or Princess Kytchu.
-
Single Player Modes and Massive Campaign
-
If you prefer to play solo, stick war 3 apk mod download has plenty of options for you. You can play a huge ever-expanding campaign with multiple chapters and animated cutscenes. You can also practice your strategies against AI opponents, or try out different scenarios in the proving grounds or daily battles.
-
Customization and Live Replays
-
Stick war 3 apk mod download lets you customize your troops with unique skins, statues, voice-lines, and emotes. You can also watch and share live replays of your games, pause, rewind, fast forward, or switch views.
-
-
How to Download and Install Stick War 3 APK Mod
-
If you want to try out stick war 3 apk mod download, here are some things you need to know:
-
Requirements and Precautions
-
-
You need an Android device with at least Android version 5.0 or higher.
-
You need to enable unknown sources on your device settings to allow the installation of apps from outside the Google Play Store.
-
You need to uninstall the original version of stick war 3 if you have it on your device (a quick way to check for an existing install from a computer is sketched after this list).
-
You need to be careful when downloading apk mods from unknown sources, as they may contain viruses or malware that can harm your device or compromise your privacy.
-
You need to be aware that using apk mods may violate the terms of service of the game developer or publisher, and may result in bans or other penalties.
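If you are not sure whether the original game is still installed, one way to check from a computer is to list the device's packages over adb and look for a matching name. The sketch below assumes adb is installed and USB debugging is enabled; the "stickwar" substring used in the filter is a guess for illustration, since the game's real package identifier may differ.

```python
import subprocess

# List every installed package on the connected device.
result = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
)

# "stickwar" is an assumed substring, not a confirmed package name.
matches = [line for line in result.stdout.splitlines() if "stickwar" in line.lower()]

if matches:
    print("Existing installs you may need to remove first:", matches)
else:
    print("No obvious Stick War package found; you can proceed with the install.")
```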
Steps to Follow
-
To download and install stick war 3 apk mod on your device, you need to follow these steps:
-
-
Click on one of the sources you can download stick war 3 apk mod from, such as [5play] or [Happymod].
-
Wait for the download to finish and locate the apk file on your device.
-
Tap on the apk file and follow the instructions to install it.
-
Launch the game and enjoy the mod features.
-
-
Sources to Download From
-
There are many sources to download stick war 3 apk mod from, but not all of them are reliable or safe. You should always check the reviews, ratings, and comments of other users before downloading any apk mod. Here are some of the sources that we recommend:
-
| Source | Description | Link |
| --- | --- | --- |
| 5play | A website that offers a variety of apk mods for different games and apps, including stick war 3. It has a user-friendly interface and fast download speed. | [5play] |
| Happymod | A platform that provides modded versions of popular games and apps, such as stick war 3. It has a large community of users and moderators who test and verify the mods. | [Happymod] |
-
Conclusion
-
Stick war 3 is a great game for strategy lovers, but it can be even better with an apk mod. An apk mod can give you access to features that are not available in the official version, such as unlimited money, free soldiers, unlocked units, or enhanced graphics. By using an apk mod, you can enjoy the game without any limitations or restrictions.
-
In this article, we have shown you the main features of stick war 3 apk mod download, how to download and install it on your device, and some FAQs that you might have. We hope that this guide has been helpful and informative for you. If you want to try out stick war 3 apk mod download, you can use one of the sources that we have recommended above. Remember to be careful when downloading apk mods from unknown sources, as they may contain viruses or malware that can harm your device or compromise your privacy.
-
If you like stick war 3 and want to support the game developer and publisher, you can also buy the official version from the Google Play Store or the App Store. The official version may not have all the features that the apk mod has, but it is more secure and stable. You can also enjoy regular updates and new content from the game developer and publisher.
-
Whether you choose to play stick war 3 with or without an apk mod, we hope that you have fun and enjoy the game. Stick war 3 is a game that offers hours of entertainment and challenge for gamers of all ages and skill levels. It is a game that tests your creativity, strategy, and reflexes. It is a game that lets you create your own army of stick figures and lead them to victory against other nations.
-
So what are you waiting for? Download stick war 3 apk mod today and start your epic adventure!
-
FAQs
-
Here are some of the frequently asked questions that you might have about stick war 3 apk mod download:
-
What are the benefits of using stick war 3 apk mod?
-
The benefits of using stick war 3 apk mod are:
-
-
You can access features that are not available in the official version, such as unlimited money, free soldiers, unlocked units, or enhanced graphics.
-
You can enjoy the game without any limitations or restrictions.
-
You can customize your troops with unique skins, statues, voice-lines, and emotes.
-
You can watch and share live replays of your games.
-
You can team up with your friends and challenge other players from around the world.
-
-
Is stick war 3 apk mod safe and legal?
-
The safety and legality of stick war 3 apk mod depend on several factors:
-
-
The source that you download it from. You should always check the reviews, ratings, and comments of other users before downloading any apk mod. You should also scan the apk file with an antivirus software before installing it.
-
The terms of service of the game developer or publisher. You should be aware that using apk mods may violate the terms of service of the game developer or publisher, and may result in bans or legal actions. You should respect the intellectual property rights of the game developer or publisher, and support them by buying the official version of the game.
-
The device that you use it on. You should make sure that your device meets the minimum requirements and has enough storage space to run the apk mod. You should also backup your data and files before installing the apk mod, in case something goes wrong.
-
-
How can I update stick war 3 apk mod?
-
To update stick war 3 apk mod, you need to follow these steps:
-
-
Check if there is a new version of the apk mod available from the source that you downloaded it from.
-
Download the new version of the apk mod and uninstall the old one.
-
Install the new version of the apk mod and launch the game.
-
-
Note that updating the apk mod may erase your progress or data, so you should backup your files before updating.
-
What are some tips and tricks for playing stick war 3?
-
Here are some tips and tricks that can help you improve your gameplay and strategy in stick war 3:
-
-
Learn the strengths and weaknesses of each unit type, and use them accordingly. For example, archers are good at long range, but weak at close combat. Speartons are good at defending, but slow at moving. Magikill are good at casting spells, but vulnerable to attacks.
-
Balance your economy and military. You need to collect gold and mana to build and upgrade your units, but you also need to train and deploy your units to attack and defend. You should not spend all your resources on one aspect, but rather distribute them wisely.
-
Use spells and enchantments wisely. Spells and enchantments can give you an edge in battle, but they also cost mana and have cooldowns. You should use them when they are most effective, such as when you have a large army or when you face a strong enemy.
-
Watch and learn from other players. You can watch live replays of other players' games, or join a clan and chat with other players. You can learn from their strategies, tactics, and mistakes, and apply them to your own games.
-
-
Where can I find more information about stick war 3?
-
If you want to find more information about stick war 3, such as news, updates, guides, or forums, you can visit these websites:
-
-
[Stick War 3 Official Website]: The official website of the game developer and publisher, where you can find the latest news, updates, features, and support for the game.
-
[Stick War 3 Wiki]: A fan-made wiki that contains detailed information about the game, such as units, spells, enchantments, generals, modes, maps, and more.
-
[Stick War 3 Reddit]: A subreddit dedicated to the game, where you can find discussions, questions, answers, tips, tricks, memes, fan art, and more.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md b/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md
deleted file mode 100644
index b8bc6f46c8aeb85244beedddd1e86b0ae336b670..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Talking Tom Gold Run Mod APK 2023: Unlimited Money and Fun
-
Do you love running games? Do you want to join Talking Tom and his friends in an endless chase for gold? Do you want to enjoy unlimited money and diamonds, unlock all characters and outfits, explore different worlds and themes, enjoy dynamic and vivid graphics, and compete with other players online? If your answer is yes, then you should download Talking Tom Gold Run Mod APK 2023 right now!
-
Introduction
-
In this article, we will tell you everything you need to know about Talking Tom Gold Run Mod APK 2023, including what it is, why you should download it, what features it offers, and how to download and install it on your device. So, without further ado, let's get started!
Talking Tom Gold Run is a popular running game developed by Outfit7 Limited, the creators of the famous Talking Tom and Friends series. In this game, you have to run after a robber who has stolen your gold and avoid various obstacles along the way. You can also collect gold bars, diamonds, boosters, and power-ups to enhance your gameplay. You can also customize your character with different outfits and accessories, and unlock new characters such as Angela, Hank, Ginger, Ben, and more. You can also explore different worlds and themes such as city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more.
-
What is Talking Tom Gold Run Mod APK?
-
Talking Tom Gold Run Mod APK is a modified version of the original game that gives you access to unlimited money and diamonds, unlocks all characters and outfits, removes ads, and provides other benefits. With this mod apk, you can enjoy the game without any limitations or restrictions. You can buy anything you want from the shop, upgrade your character's skills and abilities, unlock new worlds and themes, and have more fun than ever.
-
Why should you download Talking Tom Gold Run Mod APK 2023?
-
There are many reasons why you should download Talking Tom Gold Run Mod APK 2023. Here are some of them:
-
-
You can enjoy unlimited money and diamonds that you can use to buy anything from the shop.
-
You can unlock all characters and outfits that you can use to customize your character.
-
You can explore different worlds and themes that offer different challenges and environments.
-
You can enjoy dynamic and vivid graphics that make the game more realistic and immersive.
-
You can compete with other players online and see who can run the farthest.
-
-
Features of Talking Tom Gold Run Mod APK 2023
-
Talking Tom Gold Run Mod APK 2023 offers many features that make the game more enjoyable and exciting. Here are some of them:
-
Unlimited money and diamonds
-
With this mod apk, you will never run out of money or diamonds. You can use them to buy anything from the shop, such as boosters, power-ups, upgrades, outfits, accessories, and more. You can also use them to unlock new characters such as Angela, Hank, Ginger, Ben, and more. You can also use them to unlock new worlds and themes such as city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more.
Unlock all characters and outfits
-
With this mod apk, you can unlock all the characters and outfits that are available in the game. You can choose from Talking Tom, Angela, Hank, Ginger, Ben, and more. Each character has their own personality and voice. You can also customize your character with different outfits and accessories, such as hats, glasses, shoes, shirts, pants, dresses, and more. You can mix and match different items to create your own unique style.
-
Explore different worlds and themes
-
With this mod apk, you can explore different worlds and themes that offer different challenges and environments. You can run through city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more. Each world has its own obstacles, enemies, and scenery. You can also collect different items and coins in each world. You can also switch between different worlds and themes as you wish.
-
-
Enjoy dynamic and vivid graphics
-
With this mod apk, you can enjoy dynamic and vivid graphics that make the game more realistic and immersive. The game uses 3D animation and high-quality graphics to create a stunning visual experience. You can see the details of the characters, the environments, the effects, and the movements. You can also enjoy the smooth and fast gameplay that does not lag or crash.
-
Compete with other players online
-
With this mod apk, you can compete with other players online and see who can run the farthest. You can connect with your friends or other players from around the world. You can see their scores and rankings on the leaderboard. You can also chat with them and send them messages. You can also challenge them to a race and see who is faster.
-
How to download and install Talking Tom Gold Run Mod APK 2023
-
If you want to download and install Talking Tom Gold Run Mod APK 2023 on your device, you need to follow these simple steps:
-
Step 1: Download the mod apk file from a trusted source
-
The first step is to download the mod apk file from a trusted source. You can use the link below to download the latest version of Talking Tom Gold Run Mod APK 2023. The file size is about 90 MB, so make sure you have enough space on your device.
Step 2: Enable unknown sources on your device settings
-
The second step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.
-
Step 3: Install the mod apk file and launch the game
-
The third step is to install the mod apk file and launch the game. To do this, locate the downloaded file on your device storage > tap on it > install > open. The game will start automatically and you can enjoy all the features of Talking Tom Gold Run Mod APK 2023.
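Before tapping install, a quick structural sanity check can catch files that are not real apks at all. An apk is just a ZIP archive, so a genuine one should contain an AndroidManifest.xml entry and at least one classes.dex file. The Python sketch below runs on a computer and uses a placeholder file name; passing this check does not prove the file is safe, it only confirms the archive has the expected shape.

```python
import zipfile

APK_PATH = "talking-tom-gold-run-mod.apk"  # placeholder file name

with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    has_manifest = "AndroidManifest.xml" in names
    has_dex = any(name.endswith(".dex") for name in names)
    first_bad = apk.testzip()  # returns the first corrupt entry, or None

print("manifest present:", has_manifest)
print("dex code present:", has_dex)
print("corrupt entries:", first_bad or "none found")
```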
-
Conclusion
-
Talking Tom Gold Run Mod APK 2023 is a great running game that offers unlimited money and fun. You can enjoy all the features of the game without any limitations or restrictions. You can unlock all characters and outfits, explore different worlds and themes, enjoy dynamic and vivid graphics, and compete with other players online. You can also download and install Talking Tom Gold Run Mod APK 2023 easily by following the steps above. So, what are you waiting for? Download Talking Tom Gold Run Mod APK 2023 now and join Talking Tom and his friends in an endless chase for gold!
-
FAQs
-
Here are some frequently asked questions about Talking Tom Gold Run Mod APK 2023:
-
-
Is Talking Tom Gold Run Mod APK 2023 safe to use?
-
Yes, Talking Tom Gold Run Mod APK 2023 is safe to use. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it before installing it.
-
Is Talking Tom Gold Run Mod APK 2023 compatible with my device?
-
Talking Tom Gold Run Mod APK 2023 is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices may not support some features or functions of the game due to hardware or software limitations.
-
Can I update Talking Tom Gold Run Mod APK 2023?
-
-Talking Tom Gold Run Mod APK 2023 is updated regularly to fix bugs and improve performance. However, you may not be able to update it from the Google Play Store, as it is a modified version of the original game. To update it, you need to download the latest version of the mod apk file from the same source as before and install it over the existing one. You can also check for updates on our website or follow us on social media for the latest news and updates.
-
Will Talking Tom Gold Run Mod APK 2023 affect my progress in the original game?
-
No, Talking Tom Gold Run Mod APK 2023 will not affect your progress in the original game. The mod apk file is installed separately from the original game and does not interfere with it. You can play both games on the same device without any problems. However, you should not use the same account or login details for both games, as this may cause conflicts or errors.
-
Can I play Talking Tom Gold Run Mod APK 2023 offline?
-
Yes, you can play Talking Tom Gold Run Mod APK 2023 offline. You do not need an internet connection to play the game, except for some features that require online access, such as competing with other players online, watching ads, or downloading additional data. You can enjoy the game offline without any limitations or restrictions.
-
-
I hope this article has answered all your questions about Talking Tom Gold Run Mod APK 2023. If you have any more questions or feedback, please feel free to leave a comment below or contact us via email. Thank you for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md b/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md
deleted file mode 100644
index 34f324a31184b68b9ba82a18989a3921740c6bf3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
What is Islamevi and why you should visit it
-
If you are a Muslim living in Azerbaijan or interested in learning more about Islam, you might have heard of Islamevi. But what is Islamevi and why should you visit it? In this article, we will answer these questions and show you how Islamevi can enrich your life with Islamic knowledge, guidance, and community.
Islamevi, which means "the home of Islam" in Azerbaijani, is a non-profit organization that aims to spread the message of Islam and serve the Muslim community in Azerbaijan. It was founded in 2018 by a group of young Muslims who wanted to create a platform where Muslims can learn, practice, and share their faith.
-
The mission and vision of Islamevi
-
The mission of Islamevi is to provide authentic and reliable information about Islam, based on the Quran and the Sunnah, to the Azerbaijani people. It also strives to promote Islamic values, morals, and ethics in the society, and to foster unity and cooperation among Muslims.
-
The vision of Islamevi is to become a leading Islamic organization in Azerbaijan that contributes to the spiritual, intellectual, and social development of the Muslim community. It also hopes to inspire more people to embrace Islam and to live according to its teachings.
-
The services and activities of Islamevi
-
Islamevi offers a variety of services and activities for Muslims of all ages, backgrounds, and interests. Some of these include:
-
-
Articles on various topics related to Islam, such as beliefs, practices, history, culture, ethics, etc.
-
Question-and-answer sessions where Muslims can ask their doubts and queries about Islam and get answers from qualified scholars.
-
Online courses and webinars on Islamic sciences, such as Quran, Hadith, Fiqh, Aqeedah, etc.
-
Offline classes and workshops on Islamic subjects, such as Arabic language, Tajweed, Tafsir, etc.
-
Events and programs on Islamic occasions, such as Ramadan, Eid, Qurban, etc.
-
Social media posts and videos that share Islamic reminders, stories, quotes, etc.
-
Charity and humanitarian projects that help the needy and the oppressed in Azerbaijan and abroad.
-
-
How to access Islamevi online and offline
-
Islamevi has a strong online presence as well as a physical location where Muslims can visit and benefit from its services. Here are some ways to access Islamevi online and offline:
-
-islamevi.az religious articles
-islamevi.az Quran and hadith
-islamevi.az the month of Ramadan
-islamevi.az zakat al-fitr
-islamevi.az duas and dhikr
-islamevi.az al-Asma ul-Husna
-islamevi.az the life of the Prophet
-islamevi.az Islamic history
-islamevi.az Islamic fiqh
-islamevi.az the creed of a Muslim
-islamevi.az Islamic morals
-islamevi.az family in Islam
-islamevi.az the believing woman
-islamevi.az the world of the hereafter
-islamevi.az questions about fasting
-islamevi.az the Eid prayer
-islamevi.az the Novruz holiday
-islamevi.az the Tarawih prayer
-islamevi.az offering a qurban (sacrifice)
-islamevi.az giving sadaqah
-islamevi.az Instagram account
-islamevi.az Facebook page
-islamevi.az YouTube channel
-islamevi.az downloading religious books
-islamevi.az downloading religious programs
-islamevi.az reading religious poems
-islamevi.az sending questions and suggestions
-islamevi.az getting information about us
-islamevi.az reading religious research
-islamevi.az using the question-and-answer section
-islamevi.az watching religious TV programs
-islamevi.az listening to audio duas
-islamevi.az reading Arazda Ramazan
-islamevi.az reading the Holy Quran and its tafsir
-islamevi.az reading authentic stories
-islamevi.az reading about the companions of the Prophet
-islamevi.az reading about the Prophet's Ahl al-Bayt
-islamevi.az reading about our medieval Islamic scholars
-islamevi.az reading analyses of historical events in Islam
-islamevi.az reading about Aisha's marriage to the Prophet Muhammad
-
The website of Islamevi
-
The website of Islamevi is www.islamevi.az, where you can find all the articles, questions-and-answers, courses, webinars, events, programs, social media links, charity projects, contact details, and more. You can also subscribe to their newsletter to get updates on their latest activities.
-
-The physical location and contact details of Islamevi
-
Islamevi has a center in Baku, the capital city of Azerbaijan, where you can visit and join their classes, workshops, events, and programs. The address of the center is: İslam Evi, Nizami küç. 123, Bakı AZ1000. You can also call them at +994 12 345 67 89 or email them at info@islamevi.az.
-
The benefits of following Islamevi
-
Following Islamevi can bring many benefits to your life as a Muslim or a seeker of truth. Here are some of them:
-
Learn more about Islam and its teachings
-
Islamevi provides you with authentic and reliable information about Islam and its teachings, based on the Quran and the Sunnah. You can learn more about the basics of Islam, such as the pillars of faith and practice, the articles of belief, the sources of legislation, etc. You can also learn more about the advanced topics of Islam, such as the sciences of Quran, Hadith, Fiqh, Aqeedah, etc. You can also learn more about the history, culture, and civilization of Islam and its contributions to humanity.
-
Connect with other Muslims and share your experiences
-
Islamevi connects you with other Muslims who share your faith and values. You can join their online community and interact with them through their website and social media accounts. You can also join their offline community and meet them in person at their center or events. You can share your experiences, challenges, joys, and sorrows with them. You can also support them, advise them, and learn from them.
-
Participate in various events and programs organized by Islamevi
-
Islamevi organizes various events and programs for Muslims throughout the year. Some of these include:
-
-| Name | Description | Date |
-| --- | --- | --- |
-| Ramadan Program | A series of lectures, webinars, quizzes, competitions, and charity projects related to Ramadan. | The whole month of Ramadan. |
-| Eid Festival | A celebration of Eid al-Fitr and Eid al-Adha with prayers, games, food, gifts, and entertainment. | The first day of Shawwal and the tenth day of Dhul-Hijjah. |
-| Quran Competition | A competition to test the memorization and recitation skills of the participants. | The last week of Rajab. |
-| Hajj Workshop | A workshop to teach the rules and rituals of Hajj and Umrah. | The first week of Dhul-Qadah. |
-| Mawlid Celebration | A celebration of the birth of Prophet Muhammad (peace be upon him) with songs, poems, stories, and lectures. | The twelfth day of Rabi al-Awwal. |
-
Conclusion
-
Islamevi is a home for Muslims in Azerbaijan that provides them with Islamic knowledge, guidance, and community. It is a platform where Muslims can learn, practice, and share their faith. It is also a place where non-Muslims can discover the beauty and wisdom of Islam. If you are looking for a reliable source of Islamic information and a supportive network of Muslim friends, you should visit Islamevi today.
-
FAQs
-
Q: Is Islamevi affiliated with any political or sectarian group?
-
-A: No, Islamevi is an independent organization that follows the Quran and the Sunnah according to the understanding of the righteous predecessors (Salaf). It does not belong to any political or sectarian group or agenda.
-
Q: How can I support Islamevi financially?
-
A: You can support Islamevi financially by donating to their charity and humanitarian projects, such as feeding the poor, helping the orphans, supporting the refugees, etc. You can also sponsor their events and programs, such as Ramadan program, Eid festival, Quran competition, etc. You can donate online through their website or offline at their center.
-
Q: How can I volunteer for Islamevi?
-
A: You can volunteer for Islamevi by offering your skills, time, and energy to help them in their various activities. You can join their team of writers, editors, translators, designers, developers, teachers, organizers, etc. You can also help them in spreading their message and inviting more people to their platform. You can contact them through their website or social media accounts to express your interest and availability.
-
Q: How can I contact Islamevi for any questions or feedback?
-
A: You can contact Islamevi for any questions or feedback through their website or social media accounts. You can also call them at +994 12 345 67 89 or email them at info@islamevi.az. They are always happy to hear from you and to assist you in any way possible.
-
Q: What are some of the challenges and opportunities that Islamevi faces?
-
A: Some of the challenges that Islamevi faces are:
-
-
Lack of awareness and misconceptions about Islam among the Azerbaijani people.
-
Lack of resources and funding to sustain and expand their services and activities.
-
Lack of qualified and committed staff and volunteers to run their operations and projects.
-
-
Some of the opportunities that Islamevi has are:
-
-
-High demand for and interest in Islamic education and guidance among the Azerbaijani people.
-
High potential and talent of the young Muslim generation in Azerbaijan.
-
High support and cooperation from the government and other Islamic organizations in Azerbaijan.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md b/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md
deleted file mode 100644
index f75407188373fdd382764aa4713ebad492c96287..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
-Visit the website to learn more.
-
-From planning and simulation to real-world application, this comprehensive collection of over 400 case studies from the past three decades is designed to provide readers with practical and applicable solutions.
-
-This work-study supplement provides complete, sound and interactive answers to all questions, contains the same essential content as the full paper-and-pencil version, and is an invaluable supplement for students working towards the SAE 2003 certification or any other course in automotive technology.
-
-An introductory module on system dynamics. This is the first book to apply system dynamics to project management. It helps readers understand the basic concepts of system dynamics and how to apply them in a project environment.
-
-This book explores the relationship between knowledge management and intellectual capital. The book looks at why companies have adopted knowledge management and explores how organisations can build intellectual capital. It looks at the factors that determine knowledge sharing and the impact of intellectual capital on the knowledge economy. It examines the definition of intellectual capital and outlines the benefits that knowledge sharing can bring to the organisation. This book argues that knowledge management is, in fact, the application of intellectual capital. The book concludes with an examination of the concepts of knowledge and the importance of intellectual capital and then examines how organisations can build their intellectual capital.
-
-This book is aimed at students studying Biotechnology for the BSc (Hons) in Applied Biosciences or the undergraduate Diploma in Applied Biosciences. It provides up-to-date information in the field of applied biosciences for students on these courses and helps them develop essential lab skills. The book covers the main aspects of biotechnology for students at all levels, starting from the first year of a Biotechnology course.
-
-This book provides a platform for the organised growth of companies. It provides a coherent framework for understanding the issues involved in developing and managing a business, and the practical skills required. It highlights the importance of strategy, managing from the centre, leadership, risk assessment, financial performance and controlling and reporting. It provides easy to follow models, case studies and exercises to reinforce concepts and practise them in an applied context.
-
-The book examines the working of the European Union's internal market and the reform proposals to improve its functioning, which include revising the treaties, establishing a new treaty and operating mechanism, setting out priorities for a new legal framework, and how to influence policy. It also investigates the reform proposals for the Single Market of Services and for Data Protection.
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md b/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md
deleted file mode 100644
index 5f5c0c41b9834423f4043ecc922d0b79b07bfa5f..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py
deleted file mode 100644
index 4e8f1877978840aede93774d86643b129751db13..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import io
-import os.path as osp
-from pathlib import Path
-
-import cv2
-import numpy as np
-from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION,
- IMREAD_UNCHANGED)
-
-from annotator.mmpkg.mmcv.utils import check_file_exist, is_str, mkdir_or_exist
-
-try:
- from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG
-except ImportError:
- TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None
-
-try:
- from PIL import Image, ImageOps
-except ImportError:
- Image = None
-
-try:
- import tifffile
-except ImportError:
- tifffile = None
-
-jpeg = None
-supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile']
-
-imread_flags = {
- 'color': IMREAD_COLOR,
- 'grayscale': IMREAD_GRAYSCALE,
- 'unchanged': IMREAD_UNCHANGED,
- 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR,
- 'grayscale_ignore_orientation':
- IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE
-}
-
-imread_backend = 'cv2'
-
-
-def use_backend(backend):
- """Select a backend for image decoding.
-
- Args:
- backend (str): The image decoding backend type. Options are `cv2`,
- `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG)
- and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg`
- file format.
- """
- assert backend in supported_backends
- global imread_backend
- imread_backend = backend
- if imread_backend == 'turbojpeg':
- if TurboJPEG is None:
- raise ImportError('`PyTurboJPEG` is not installed')
- global jpeg
- if jpeg is None:
- jpeg = TurboJPEG()
- elif imread_backend == 'pillow':
- if Image is None:
- raise ImportError('`Pillow` is not installed')
- elif imread_backend == 'tifffile':
- if tifffile is None:
- raise ImportError('`tifffile` is not installed')
-
-
-def _jpegflag(flag='color', channel_order='bgr'):
- channel_order = channel_order.lower()
- if channel_order not in ['rgb', 'bgr']:
- raise ValueError('channel order must be either "rgb" or "bgr"')
-
- if flag == 'color':
- if channel_order == 'bgr':
- return TJPF_BGR
- elif channel_order == 'rgb':
- return TJCS_RGB
- elif flag == 'grayscale':
- return TJPF_GRAY
- else:
- raise ValueError('flag must be "color" or "grayscale"')
-
-
-def _pillow2array(img, flag='color', channel_order='bgr'):
- """Convert a pillow image to numpy array.
-
- Args:
- img (:obj:`PIL.Image.Image`): The image loaded using PIL
- flag (str): Flags specifying the color type of a loaded image,
- candidates are 'color', 'grayscale' and 'unchanged'.
- Default to 'color'.
- channel_order (str): The channel order of the output image array,
- candidates are 'bgr' and 'rgb'. Default to 'bgr'.
-
- Returns:
- np.ndarray: The converted numpy array
- """
- channel_order = channel_order.lower()
- if channel_order not in ['rgb', 'bgr']:
- raise ValueError('channel order must be either "rgb" or "bgr"')
-
- if flag == 'unchanged':
- array = np.array(img)
- if array.ndim >= 3 and array.shape[2] >= 3: # color image
- array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR
- else:
- # Handle exif orientation tag
- if flag in ['color', 'grayscale']:
- img = ImageOps.exif_transpose(img)
- # If the image mode is not 'RGB', convert it to 'RGB' first.
- if img.mode != 'RGB':
- if img.mode != 'LA':
- # Most formats except 'LA' can be directly converted to RGB
- img = img.convert('RGB')
- else:
- # When the mode is 'LA', the default conversion will fill in
- # the canvas with black, which sometimes shadows black objects
- # in the foreground.
- #
- # Therefore, a random color (124, 117, 104) is used for canvas
- img_rgba = img.convert('RGBA')
- img = Image.new('RGB', img_rgba.size, (124, 117, 104))
- img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha
- if flag in ['color', 'color_ignore_orientation']:
- array = np.array(img)
- if channel_order != 'rgb':
- array = array[:, :, ::-1] # RGB to BGR
- elif flag in ['grayscale', 'grayscale_ignore_orientation']:
- img = img.convert('L')
- array = np.array(img)
- else:
- raise ValueError(
- 'flag must be "color", "grayscale", "unchanged", '
- f'"color_ignore_orientation" or "grayscale_ignore_orientation"'
- f' but got {flag}')
- return array
-
-
-def imread(img_or_path, flag='color', channel_order='bgr', backend=None):
- """Read an image.
-
- Args:
- img_or_path (ndarray or str or Path): Either a numpy array or str or
- pathlib.Path. If it is a numpy array (loaded image), then
- it will be returned as is.
- flag (str): Flags specifying the color type of a loaded image,
- candidates are `color`, `grayscale`, `unchanged`,
- `color_ignore_orientation` and `grayscale_ignore_orientation`.
- By default, `cv2` and `pillow` backend would rotate the image
- according to its EXIF info unless called with `unchanged` or
- `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend
- always ignore image's EXIF info regardless of the flag.
- The `turbojpeg` backend only supports `color` and `grayscale`.
- channel_order (str): Order of channel, candidates are `bgr` and `rgb`.
- backend (str | None): The image decoding backend type. Options are
- `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`.
- If backend is None, the global imread_backend specified by
- ``mmcv.use_backend()`` will be used. Default: None.
-
- Returns:
- ndarray: Loaded image array.
- """
-
- if backend is None:
- backend = imread_backend
- if backend not in supported_backends:
-        raise ValueError(f'backend: {backend} is not supported. Supported '
-                         "backends are 'cv2', 'turbojpeg', 'pillow', 'tifffile'")
- if isinstance(img_or_path, Path):
- img_or_path = str(img_or_path)
-
- if isinstance(img_or_path, np.ndarray):
- return img_or_path
- elif is_str(img_or_path):
- check_file_exist(img_or_path,
- f'img file does not exist: {img_or_path}')
- if backend == 'turbojpeg':
- with open(img_or_path, 'rb') as in_file:
- img = jpeg.decode(in_file.read(),
- _jpegflag(flag, channel_order))
- if img.shape[-1] == 1:
- img = img[:, :, 0]
- return img
- elif backend == 'pillow':
- img = Image.open(img_or_path)
- img = _pillow2array(img, flag, channel_order)
- return img
- elif backend == 'tifffile':
- img = tifffile.imread(img_or_path)
- return img
- else:
- flag = imread_flags[flag] if is_str(flag) else flag
- img = cv2.imread(img_or_path, flag)
- if flag == IMREAD_COLOR and channel_order == 'rgb':
- cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img)
- return img
- else:
- raise TypeError('"img" must be a numpy array or a str or '
- 'a pathlib.Path object')
-
-
-def imfrombytes(content, flag='color', channel_order='bgr', backend=None):
- """Read an image from bytes.
-
- Args:
- content (bytes): Image bytes got from files or other streams.
- flag (str): Same as :func:`imread`.
- backend (str | None): The image decoding backend type. Options are
- `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the
- global imread_backend specified by ``mmcv.use_backend()`` will be
- used. Default: None.
-
- Returns:
- ndarray: Loaded image array.
- """
-
- if backend is None:
- backend = imread_backend
- if backend not in supported_backends:
- raise ValueError(f'backend: {backend} is not supported. Supported '
- "backends are 'cv2', 'turbojpeg', 'pillow'")
- if backend == 'turbojpeg':
- img = jpeg.decode(content, _jpegflag(flag, channel_order))
- if img.shape[-1] == 1:
- img = img[:, :, 0]
- return img
- elif backend == 'pillow':
- buff = io.BytesIO(content)
- img = Image.open(buff)
- img = _pillow2array(img, flag, channel_order)
- return img
- else:
- img_np = np.frombuffer(content, np.uint8)
- flag = imread_flags[flag] if is_str(flag) else flag
- img = cv2.imdecode(img_np, flag)
- if flag == IMREAD_COLOR and channel_order == 'rgb':
- cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img)
- return img
-
-
-def imwrite(img, file_path, params=None, auto_mkdir=True):
- """Write image to file.
-
- Args:
- img (ndarray): Image array to be written.
- file_path (str): Image file path.
- params (None or list): Same as opencv :func:`imwrite` interface.
- auto_mkdir (bool): If the parent folder of `file_path` does not exist,
- whether to create it automatically.
-
- Returns:
- bool: Successful or not.
- """
- if auto_mkdir:
- dir_name = osp.abspath(osp.dirname(file_path))
- mkdir_or_exist(dir_name)
- return cv2.imwrite(file_path, img, params)
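# Editorial usage sketch (annotation, not part of the original io.py): selecting
# a decoding backend, reading an image and writing it back out. The file names
# below are hypothetical placeholders.
def _example_io_usage():
    use_backend('pillow')                       # decode with Pillow instead of the default cv2
    img = imread('demo_in.jpg', flag='color')   # HxWx3 ndarray in BGR channel order
    return imwrite(img, 'demo_out.png')         # returns True on success; parent dir auto-created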
diff --git a/spaces/course-demos/whisper-small/README.md b/spaces/course-demos/whisper-small/README.md
deleted file mode 100644
index db18712bd3b533bd10b9e7f18057c59d6acf2641..0000000000000000000000000000000000000000
--- a/spaces/course-demos/whisper-small/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Whisper Small
-emoji: 🌍
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py b/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py
deleted file mode 100644
index 5814748777cd3e262b268d1de9ee7637f4df79c9..0000000000000000000000000000000000000000
--- a/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from typing import List, Dict, Tuple
-import numpy as np
-import torch
-from .yolov7 import YOLOBase
-from models.base.trt_base import TRT_Base
-from models.base.onnx_base import ONNX_Base
-
-class MMYOLOv8TRT(TRT_Base, YOLOBase):
- def __init__(self,
- preprocess_cfg: Dict=dict(
- border_color=(114, 114, 114),
- auto=False,
- scaleFill=True,
- scaleup=True,
- stride=32),
- nms_agnostic_cfg: Dict=dict(
- type='nms',
- iou_threshold=0.9,
- class_agnostic=True),
- score_thr=0.1,
- img_shape: Tuple[int, int]=(640, 640),
- batch_size: int=32,
- model_path: str="",
- device: str='0',):
- """ YOLOv8 TRT class for inference, which is based on TRT_Base and YOLOBase.
- """
- self.img_shape = img_shape
- self.batch_size = batch_size
- input_shape = (self.batch_size, *self.img_shape)
- super().__init__(input_shape, model_path, device)
- YOLOBase.__init__(self, preprocess_cfg=preprocess_cfg,
- nms_agnostic_cfg=nms_agnostic_cfg,
- score_thr=score_thr,
- use_torch=True)
-
- def infer_batch(self, image_batch: np.ndarray) -> List[Dict]:
- """ Batch inference function for batch input image.
-
- Args:
- image_batch (np.ndarray): batch of input image.
- """
-
- tensor_data, height, width, ratio, dwdh = self.preprocess(image_batch)
- self.change_runtime_dimension(input_shape=(len(tensor_data), 3, height, width))
- self.model['binding_addrs']['input'] = int(tensor_data.data_ptr())
- self.model['context'].execute_v2(list(self.model['binding_addrs'].values()))
- dets = self.model['bindings']['dets'].data.cpu()
- classes = self.model['bindings']['labels'].data.cpu()
- boxes = dets[:,:,:4]
- scores = dets[:,:,4]
-
- return self.post_process(boxes, scores, classes, ratio, dwdh)
-
-class MMYOLOv8ONNX(ONNX_Base, YOLOBase):
- def __init__(self,
- preprocess_cfg,
- nms_agnostic_cfg,
- score_thr=0.1,
- img_shape: Tuple[int, int]=(640, 640),
- batch_size: int=32,
- model_path: str="",
- device: str='0',):
-        """ YOLOv8 ONNX class for inference, which is based on ONNX_Base and YOLOBase.
- """
- self.img_shape = img_shape
- self.batch_size = batch_size
- input_shape = (self.batch_size, *self.img_shape)
- super().__init__(input_shape, model_path, device)
- YOLOBase.__init__(self,
- preprocess_cfg=preprocess_cfg,
- nms_agnostic_cfg=nms_agnostic_cfg,
- score_thr=score_thr,
- use_torch=False)
-
- def infer_batch(self, image_batch: np.ndarray) -> List[Dict]:
-
- numpy_array_data, height, width, ratio, dwdh = self.preprocess(image_batch)
- numpy_array_data = numpy_array_data.astype(np.float32)
- results = super().infer_batch(numpy_array_data)
- dets, classes = results
- dets = torch.from_numpy(dets)
- classes = torch.from_numpy(classes)
- boxes = dets[:,:,:4]
- scores = dets[:,:,4]
-
- return self.post_process(boxes, scores, classes, ratio, dwdh)
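# Editorial usage sketch (annotation, not part of the original file): building the
# ONNX variant and running one batch of inference. The config values, the model
# path and the zero-filled input batch are hypothetical placeholders; the exact
# input layout expected by YOLOBase.preprocess may differ in practice.
def _example_mmyolov8_onnx():
    detector = MMYOLOv8ONNX(
        preprocess_cfg=dict(border_color=(114, 114, 114), auto=False,
                            scaleFill=True, scaleup=True, stride=32),
        nms_agnostic_cfg=dict(type='nms', iou_threshold=0.9, class_agnostic=True),
        score_thr=0.1,
        img_shape=(640, 640),
        batch_size=4,
        model_path='weights/mmyolov8.onnx',   # placeholder path
        device='0')
    dummy_batch = np.zeros((4, 720, 1280, 3), dtype=np.uint8)  # four blank BGR frames
    return detector.infer_batch(dummy_batch)  # List[Dict] of per-image detections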
diff --git a/spaces/cymic/VITS-Tokaiteio/models.py b/spaces/cymic/VITS-Tokaiteio/models.py
deleted file mode 100644
index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000
--- a/spaces/cymic/VITS-Tokaiteio/models.py
+++ /dev/null
@@ -1,534 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # it needs to be removed from future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 0, "n_speakers have to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
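# Editorial usage sketch (annotation, not part of the original models.py): running
# text-to-speech inference with a SynthesizerTrn. Every hyperparameter below is an
# illustrative placeholder, not the real configuration of this Space's checkpoint.
def _example_vits_infer():
    net_g = SynthesizerTrn(
        n_vocab=100, spec_channels=513, segment_size=32,
        inter_channels=192, hidden_channels=192, filter_channels=768,
        n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
        resblock='1',
        resblock_kernel_sizes=[3, 7, 11],
        resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
        upsample_rates=[8, 8, 2, 2],
        upsample_initial_channel=512,
        upsample_kernel_sizes=[16, 16, 4, 4])
    net_g.eval()
    x = torch.randint(0, 100, (1, 20))          # dummy phoneme-id sequence
    x_lengths = torch.LongTensor([20])
    with torch.no_grad():
        audio = net_g.infer(x, x_lengths, noise_scale=0.667, length_scale=1.0)[0]
    return audio                                 # [1, 1, T] synthesized waveform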
diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py
deleted file mode 100644
index 60d9708625b0d09d6637a8486389db9dbf0da5a2..0000000000000000000000000000000000000000
--- a/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import os
-import sys
-import traceback
-
-import numpy as np
-from PIL import Image
-from basicsr.utils.download_util import load_file_from_url
-from realesrgan import RealESRGANer
-
-from modules.upscaler import Upscaler, UpscalerData
-from modules.paths import models_path
-from modules.shared import cmd_opts, opts
-
-
-class UpscalerRealESRGAN(Upscaler):
- def __init__(self, path):
- self.name = "RealESRGAN"
- self.model_path = os.path.join(models_path, self.name)
- self.user_path = path
- super().__init__()
- try:
- from basicsr.archs.rrdbnet_arch import RRDBNet
- from realesrgan import RealESRGANer
- from realesrgan.archs.srvgg_arch import SRVGGNetCompact
- self.enable = True
- self.scalers = []
- scalers = self.load_models(path)
- for scaler in scalers:
- if scaler.name in opts.realesrgan_enabled_models:
- self.scalers.append(scaler)
-
- except Exception:
- print("Error importing Real-ESRGAN:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- self.enable = False
- self.scalers = []
-
- def do_upscale(self, img, path):
- if not self.enable:
- return img
-
- info = self.load_model(path)
- if not os.path.exists(info.data_path):
- print("Unable to load RealESRGAN model: %s" % info.name)
- return img
-
- upsampler = RealESRGANer(
- scale=info.scale,
- model_path=info.data_path,
- model=info.model(),
- half=not cmd_opts.no_half,
- tile=opts.ESRGAN_tile,
- tile_pad=opts.ESRGAN_tile_overlap,
- )
-
- upsampled = upsampler.enhance(np.array(img), outscale=info.scale)[0]
-
- image = Image.fromarray(upsampled)
- return image
-
- def load_model(self, path):
- try:
- info = None
- for scaler in self.scalers:
- if scaler.data_path == path:
- info = scaler
-
- if info is None:
- print(f"Unable to find model info: {path}")
- return None
-
- model_file = load_file_from_url(url=info.data_path, model_dir=self.model_path, progress=True)
- info.data_path = model_file
- return info
- except Exception as e:
-            print(f"Error loading Real-ESRGAN model: {e}", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- return None
-
- def load_models(self, _):
- return get_realesrgan_models(self)
-
-
-def get_realesrgan_models(scaler):
- try:
- from basicsr.archs.rrdbnet_arch import RRDBNet
- from realesrgan.archs.srvgg_arch import SRVGGNetCompact
- models = [
- UpscalerData(
- name="R-ESRGAN General 4xV3",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth",
- scale=4,
- upscaler=scaler,
- model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu')
- ),
- UpscalerData(
- name="R-ESRGAN General WDN 4xV3",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth",
- scale=4,
- upscaler=scaler,
- model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu')
- ),
- UpscalerData(
- name="R-ESRGAN AnimeVideo",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth",
- scale=4,
- upscaler=scaler,
- model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
- ),
- UpscalerData(
- name="R-ESRGAN 4x+",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
- scale=4,
- upscaler=scaler,
- model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- ),
- UpscalerData(
- name="R-ESRGAN 4x+ Anime6B",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
- scale=4,
- upscaler=scaler,
- model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- ),
- UpscalerData(
- name="R-ESRGAN 2x+",
- path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
- scale=2,
- upscaler=scaler,
- model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
- ),
- ]
- return models
- except Exception as e:
- print("Error making Real-ESRGAN models list:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/detection/yolov5face/models/__init__.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/detection/yolov5face/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py
deleted file mode 100644
index d279f89cc82cc280370d09ebdb16cb301f62aa57..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py
+++ /dev/null
@@ -1,246 +0,0 @@
-"""
-This module implements the algorithm for converting between a "user name" -
-something that a user can choose arbitrarily inside a font editor - and a file
-name suitable for use in a wide range of operating systems and filesystems.
-
-The `UFO 3 specification `_
-provides an example of an algorithm for such conversion, which avoids illegal
-characters, reserved file names, ambiguity between upper- and lower-case
-characters, and clashes with existing files.
-
-This code was originally copied from
-`ufoLib `_
-by Tal Leming and is copyright (c) 2005-2016, The RoboFab Developers:
-
-- Erik van Blokland
-- Tal Leming
-- Just van Rossum
-"""
-
-
-illegalCharacters = r"\" * + / : < > ? [ \ ] | \0".split(" ")
-illegalCharacters += [chr(i) for i in range(1, 32)]
-illegalCharacters += [chr(0x7F)]
-reservedFileNames = "CON PRN AUX CLOCK$ NUL A:-Z: COM1".lower().split(" ")
-reservedFileNames += "LPT1 LPT2 LPT3 COM2 COM3 COM4".lower().split(" ")
-maxFileNameLength = 255
-
-
-class NameTranslationError(Exception):
- pass
-
-
-def userNameToFileName(userName, existing=[], prefix="", suffix=""):
- """Converts from a user name to a file name.
-
- Takes care to avoid illegal characters, reserved file names, ambiguity between
- upper- and lower-case characters, and clashes with existing files.
-
- Args:
- userName (str): The input file name.
- existing: A case-insensitive list of all existing file names.
- prefix: Prefix to be prepended to the file name.
- suffix: Suffix to be appended to the file name.
-
- Returns:
- A suitable filename.
-
- Raises:
- NameTranslationError: If no suitable name could be generated.
-
- Examples::
-
- >>> userNameToFileName("a") == "a"
- True
- >>> userNameToFileName("A") == "A_"
- True
- >>> userNameToFileName("AE") == "A_E_"
- True
- >>> userNameToFileName("Ae") == "A_e"
- True
- >>> userNameToFileName("ae") == "ae"
- True
- >>> userNameToFileName("aE") == "aE_"
- True
- >>> userNameToFileName("a.alt") == "a.alt"
- True
- >>> userNameToFileName("A.alt") == "A_.alt"
- True
- >>> userNameToFileName("A.Alt") == "A_.A_lt"
- True
- >>> userNameToFileName("A.aLt") == "A_.aL_t"
- True
- >>> userNameToFileName(u"A.alT") == "A_.alT_"
- True
- >>> userNameToFileName("T_H") == "T__H_"
- True
- >>> userNameToFileName("T_h") == "T__h"
- True
- >>> userNameToFileName("t_h") == "t_h"
- True
- >>> userNameToFileName("F_F_I") == "F__F__I_"
- True
- >>> userNameToFileName("f_f_i") == "f_f_i"
- True
- >>> userNameToFileName("Aacute_V.swash") == "A_acute_V_.swash"
- True
- >>> userNameToFileName(".notdef") == "_notdef"
- True
- >>> userNameToFileName("con") == "_con"
- True
- >>> userNameToFileName("CON") == "C_O_N_"
- True
- >>> userNameToFileName("con.alt") == "_con.alt"
- True
- >>> userNameToFileName("alt.con") == "alt._con"
- True
- """
- # the incoming name must be a str
- if not isinstance(userName, str):
- raise ValueError("The value for userName must be a string.")
- # establish the prefix and suffix lengths
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- # replace an initial period with an _
- # if no prefix is to be added
- if not prefix and userName[0] == ".":
- userName = "_" + userName[1:]
- # filter the user name
- filteredUserName = []
- for character in userName:
- # replace illegal characters with _
- if character in illegalCharacters:
- character = "_"
- # add _ to all non-lower characters
- elif character != character.lower():
- character += "_"
- filteredUserName.append(character)
- userName = "".join(filteredUserName)
- # clip to 255
- sliceLength = maxFileNameLength - prefixLength - suffixLength
- userName = userName[:sliceLength]
- # test for illegal files names
- parts = []
- for part in userName.split("."):
- if part.lower() in reservedFileNames:
- part = "_" + part
- parts.append(part)
- userName = ".".join(parts)
- # test for clash
- fullName = prefix + userName + suffix
- if fullName.lower() in existing:
- fullName = handleClash1(userName, existing, prefix, suffix)
- # finished
- return fullName
-
-
-def handleClash1(userName, existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = ["a" * 5]
-
- >>> e = list(existing)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "aaaaa" + "1".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000002.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "AAAAA" + "2".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
- """
-    # if the prefix length + user name length + suffix length + 15 exceeds
-    # the maximum length, trim the user name so that a 15 digit counter still fits
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- if prefixLength + len(userName) + suffixLength + 15 > maxFileNameLength:
- l = prefixLength + len(userName) + suffixLength + 15
- sliceLength = maxFileNameLength - l
- userName = userName[:sliceLength]
- finalName = None
- # try to add numbers to create a unique name
- counter = 1
- while finalName is None:
- name = userName + str(counter).zfill(15)
- fullName = prefix + name + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= 999999999999999:
- break
- # if there is a clash, go to the next fallback
- if finalName is None:
- finalName = handleClash2(existing, prefix, suffix)
- # finished
- return finalName
-
-
-def handleClash2(existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = [prefix + str(i) + suffix for i in range(100)]
-
- >>> e = list(existing)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.100.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "1" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.1.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "2" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.2.0000000000')
- True
- """
- # calculate the longest possible string
- maxLength = maxFileNameLength - len(prefix) - len(suffix)
- maxValue = int("9" * maxLength)
- # try to find a number
- finalName = None
- counter = 1
- while finalName is None:
- fullName = prefix + str(counter) + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= maxValue:
- break
- # raise an error if nothing has been found
- if finalName is None:
- raise NameTranslationError("No unique name could be found.")
- # finished
- return finalName
-
-
-if __name__ == "__main__":
- import doctest
- import sys
-
- sys.exit(doctest.testmod().failed)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py
deleted file mode 100644
index 673373ffdf4825d4caac4ce5959eb0ee9e11046c..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py
+++ /dev/null
@@ -1,460 +0,0 @@
-"""Module for reading TFM (TeX Font Metrics) files.
-
-The TFM format is described in the TFtoPL WEB source code, whose typeset form
-can be found on CTAN.
-
- >>> from fontTools.tfmLib import TFM
- >>> tfm = TFM("Tests/tfmLib/data/cmr10.tfm")
- >>>
- >>> # Accessing an attribute gets you metadata.
- >>> tfm.checksum
- 1274110073
- >>> tfm.designsize
- 10.0
- >>> tfm.codingscheme
- 'TeX text'
- >>> tfm.family
- 'CMR'
- >>> tfm.seven_bit_safe_flag
- False
- >>> tfm.face
- 234
- >>> tfm.extraheader
- {}
- >>> tfm.fontdimens
- {'SLANT': 0.0, 'SPACE': 0.33333396911621094, 'STRETCH': 0.16666698455810547, 'SHRINK': 0.11111164093017578, 'XHEIGHT': 0.4305553436279297, 'QUAD': 1.0000028610229492, 'EXTRASPACE': 0.11111164093017578}
- >>> # Accessing a character gets you its metrics.
- >>> # “width” is always available, other metrics are available only when
- >>> # applicable. All values are relative to “designsize”.
- >>> tfm.chars[ord("g")]
- {'width': 0.5000019073486328, 'height': 0.4305553436279297, 'depth': 0.1944446563720703, 'italic': 0.013888359069824219}
- >>> # Kerning and ligature can be accessed as well.
- >>> tfm.kerning[ord("c")]
- {104: -0.02777862548828125, 107: -0.02777862548828125}
- >>> tfm.ligatures[ord("f")]
- {105: ('LIG', 12), 102: ('LIG', 11), 108: ('LIG', 13)}
-"""
-
-from types import SimpleNamespace
-
-from fontTools.misc.sstruct import calcsize, unpack, unpack2
-
-SIZES_FORMAT = """
- >
- lf: h # length of the entire file, in words
- lh: h # length of the header data, in words
- bc: h # smallest character code in the font
- ec: h # largest character code in the font
- nw: h # number of words in the width table
- nh: h # number of words in the height table
- nd: h # number of words in the depth table
- ni: h # number of words in the italic correction table
- nl: h # number of words in the ligature/kern table
- nk: h # number of words in the kern table
- ne: h # number of words in the extensible character table
- np: h # number of font parameter words
-"""
-
-SIZES_SIZE = calcsize(SIZES_FORMAT)
-
-FIXED_FORMAT = "12.20F"
-
-HEADER_FORMAT1 = f"""
- >
- checksum: L
- designsize: {FIXED_FORMAT}
-"""
-
-HEADER_FORMAT2 = f"""
- {HEADER_FORMAT1}
- codingscheme: 40p
-"""
-
-HEADER_FORMAT3 = f"""
- {HEADER_FORMAT2}
- family: 20p
-"""
-
-HEADER_FORMAT4 = f"""
- {HEADER_FORMAT3}
- seven_bit_safe_flag: ?
- ignored: x
- ignored: x
- face: B
-"""
-
-HEADER_SIZE1 = calcsize(HEADER_FORMAT1)
-HEADER_SIZE2 = calcsize(HEADER_FORMAT2)
-HEADER_SIZE3 = calcsize(HEADER_FORMAT3)
-HEADER_SIZE4 = calcsize(HEADER_FORMAT4)
-
-LIG_KERN_COMMAND = """
- >
- skip_byte: B
- next_char: B
- op_byte: B
- remainder: B
-"""
-
-BASE_PARAMS = [
- "SLANT",
- "SPACE",
- "STRETCH",
- "SHRINK",
- "XHEIGHT",
- "QUAD",
- "EXTRASPACE",
-]
-
-MATHSY_PARAMS = [
- "NUM1",
- "NUM2",
- "NUM3",
- "DENOM1",
- "DENOM2",
- "SUP1",
- "SUP2",
- "SUP3",
- "SUB1",
- "SUB2",
- "SUPDROP",
- "SUBDROP",
- "DELIM1",
- "DELIM2",
- "AXISHEIGHT",
-]
-
-MATHEX_PARAMS = [
- "DEFAULTRULETHICKNESS",
- "BIGOPSPACING1",
- "BIGOPSPACING2",
- "BIGOPSPACING3",
- "BIGOPSPACING4",
- "BIGOPSPACING5",
-]
-
-VANILLA = 0
-MATHSY = 1
-MATHEX = 2
-
-UNREACHABLE = 0
-PASSTHROUGH = 1
-ACCESSABLE = 2
-
-NO_TAG = 0
-LIG_TAG = 1
-LIST_TAG = 2
-EXT_TAG = 3
-
-STOP_FLAG = 128
-KERN_FLAG = 128
-
-
-class TFMException(Exception):
- def __init__(self, message):
- super().__init__(message)
-
-
-class TFM:
- def __init__(self, file):
- self._read(file)
-
- def __repr__(self):
- return (
-            f"TFM(family={self.family!r}, codingscheme={self.codingscheme!r},"
-            f" designsize={self.designsize:g})"
- )
-
- def _read(self, file):
- if hasattr(file, "read"):
- data = file.read()
- else:
- with open(file, "rb") as fp:
- data = fp.read()
-
- self._data = data
-
- if len(data) < SIZES_SIZE:
- raise TFMException("Too short input file")
-
- sizes = SimpleNamespace()
- unpack2(SIZES_FORMAT, data, sizes)
-
- # Do some file structure sanity checks.
- # TeX and TFtoPL do additional functional checks and might even correct
- # “errors” in the input file, but we instead try to output the file as
- # it is as long as it is parsable, even if the data make no sense.
-
- if sizes.lf < 0:
- raise TFMException("The file claims to have negative or zero length!")
-
- if len(data) < sizes.lf * 4:
- raise TFMException("The file has fewer bytes than it claims!")
-
- for name, length in vars(sizes).items():
- if length < 0:
-                raise TFMException(f"The subfile size: '{name}' is negative!")
-
- if sizes.lh < 2:
- raise TFMException(f"The header length is only {sizes.lh}!")
-
- if sizes.bc > sizes.ec + 1 or sizes.ec > 255:
- raise TFMException(
- f"The character code range {sizes.bc}..{sizes.ec} is illegal!"
- )
-
- if sizes.nw == 0 or sizes.nh == 0 or sizes.nd == 0 or sizes.ni == 0:
- raise TFMException("Incomplete subfiles for character dimensions!")
-
- if sizes.ne > 256:
-            raise TFMException(f"There are {sizes.ne} extensible recipes!")
-
- if sizes.lf != (
- 6
- + sizes.lh
- + (sizes.ec - sizes.bc + 1)
- + sizes.nw
- + sizes.nh
- + sizes.nd
- + sizes.ni
- + sizes.nl
- + sizes.nk
- + sizes.ne
- + sizes.np
- ):
- raise TFMException("Subfile sizes don’t add up to the stated total")
-
- # Subfile offsets, used in the helper function below. These all are
- # 32-bit word offsets not 8-bit byte offsets.
- char_base = 6 + sizes.lh - sizes.bc
- width_base = char_base + sizes.ec + 1
- height_base = width_base + sizes.nw
- depth_base = height_base + sizes.nh
- italic_base = depth_base + sizes.nd
- lig_kern_base = italic_base + sizes.ni
- kern_base = lig_kern_base + sizes.nl
- exten_base = kern_base + sizes.nk
- param_base = exten_base + sizes.ne
-
- # Helper functions for accessing individual data. If this looks
- # nonidiomatic Python, I blame the effect of reading the literate WEB
- # documentation of TFtoPL.
- def char_info(c):
- return 4 * (char_base + c)
-
- def width_index(c):
- return data[char_info(c)]
-
- def noneexistent(c):
- return c < sizes.bc or c > sizes.ec or width_index(c) == 0
-
- def height_index(c):
- return data[char_info(c) + 1] // 16
-
- def depth_index(c):
- return data[char_info(c) + 1] % 16
-
- def italic_index(c):
- return data[char_info(c) + 2] // 4
-
- def tag(c):
- return data[char_info(c) + 2] % 4
-
- def remainder(c):
- return data[char_info(c) + 3]
-
- def width(c):
- r = 4 * (width_base + width_index(c))
- return read_fixed(r, "v")["v"]
-
- def height(c):
- r = 4 * (height_base + height_index(c))
- return read_fixed(r, "v")["v"]
-
- def depth(c):
- r = 4 * (depth_base + depth_index(c))
- return read_fixed(r, "v")["v"]
-
- def italic(c):
- r = 4 * (italic_base + italic_index(c))
- return read_fixed(r, "v")["v"]
-
- def exten(c):
- return 4 * (exten_base + remainder(c))
-
- def lig_step(i):
- return 4 * (lig_kern_base + i)
-
- def lig_kern_command(i):
- command = SimpleNamespace()
- unpack2(LIG_KERN_COMMAND, data[i:], command)
- return command
-
- def kern(i):
- r = 4 * (kern_base + i)
- return read_fixed(r, "v")["v"]
-
- def param(i):
- return 4 * (param_base + i)
-
- def read_fixed(index, key, obj=None):
- ret = unpack2(f">;{key}:{FIXED_FORMAT}", data[index:], obj)
- return ret[0]
-
- # Set all attributes to empty values regardless of the header size.
- unpack(HEADER_FORMAT4, [0] * HEADER_SIZE4, self)
-
- offset = 24
- length = sizes.lh * 4
- self.extraheader = {}
- if length >= HEADER_SIZE4:
- rest = unpack2(HEADER_FORMAT4, data[offset:], self)[1]
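-            # Decode the face byte into TFtoPL's weight/slope/expansion letters
-            # (Medium/Bold/Light, Roman/Italic, Regular/Condensed/Extended).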
- if self.face < 18:
- s = self.face % 2
- b = self.face // 2
- self.face = "MBL"[b % 3] + "RI"[s] + "RCE"[b // 3]
- for i in range(sizes.lh - HEADER_SIZE4 // 4):
- rest = unpack2(f">;HEADER{i + 18}:l", rest, self.extraheader)[1]
- elif length >= HEADER_SIZE3:
- unpack2(HEADER_FORMAT3, data[offset:], self)
- elif length >= HEADER_SIZE2:
- unpack2(HEADER_FORMAT2, data[offset:], self)
- elif length >= HEADER_SIZE1:
- unpack2(HEADER_FORMAT1, data[offset:], self)
-
- self.fonttype = VANILLA
- scheme = self.codingscheme.upper()
- if scheme.startswith("TEX MATH SY"):
- self.fonttype = MATHSY
- elif scheme.startswith("TEX MATH EX"):
- self.fonttype = MATHEX
-
- self.fontdimens = {}
- for i in range(sizes.np):
- name = f"PARAMETER{i+1}"
- if i <= 6:
- name = BASE_PARAMS[i]
- elif self.fonttype == MATHSY and i <= 21:
- name = MATHSY_PARAMS[i - 7]
- elif self.fonttype == MATHEX and i <= 12:
- name = MATHEX_PARAMS[i - 7]
- read_fixed(param(i), name, self.fontdimens)
-
- lig_kern_map = {}
- self.right_boundary_char = None
- self.left_boundary_char = None
- if sizes.nl > 0:
- cmd = lig_kern_command(lig_step(0))
- if cmd.skip_byte == 255:
- self.right_boundary_char = cmd.next_char
-
- cmd = lig_kern_command(lig_step((sizes.nl - 1)))
- if cmd.skip_byte == 255:
- self.left_boundary_char = 256
- r = 256 * cmd.op_byte + cmd.remainder
- lig_kern_map[self.left_boundary_char] = r
-
- self.chars = {}
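-        # Build per-character metrics; optional metrics (height, depth, italic)
-        # are only stored when their table index is non-zero.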
- for c in range(sizes.bc, sizes.ec + 1):
- if width_index(c) > 0:
- self.chars[c] = info = {}
- info["width"] = width(c)
- if height_index(c) > 0:
- info["height"] = height(c)
- if depth_index(c) > 0:
- info["depth"] = depth(c)
- if italic_index(c) > 0:
- info["italic"] = italic(c)
- char_tag = tag(c)
- if char_tag == NO_TAG:
- pass
- elif char_tag == LIG_TAG:
- lig_kern_map[c] = remainder(c)
- elif char_tag == LIST_TAG:
- info["nextlarger"] = remainder(c)
- elif char_tag == EXT_TAG:
- info["varchar"] = varchar = {}
- for i in range(4):
- part = data[exten(c) + i]
- if i == 3 or part > 0:
- name = "rep"
- if i == 0:
- name = "top"
- elif i == 1:
- name = "mid"
- elif i == 2:
- name = "bot"
- if noneexistent(part):
- varchar[name] = c
- else:
- varchar[name] = part
-
- self.ligatures = {}
- self.kerning = {}
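-        # Walk the ligature/kern program from each entry point: steps whose
-        # op_byte >= KERN_FLAG encode kerns, the rest encode LIG operations, and
-        # skip_byte controls how the program advances and when it stops.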
- for c, i in sorted(lig_kern_map.items()):
- cmd = lig_kern_command(lig_step(i))
- if cmd.skip_byte > STOP_FLAG:
- i = 256 * cmd.op_byte + cmd.remainder
-
- while i < sizes.nl:
- cmd = lig_kern_command(lig_step(i))
- if cmd.skip_byte > STOP_FLAG:
- pass
- else:
- if cmd.op_byte >= KERN_FLAG:
- r = 256 * (cmd.op_byte - KERN_FLAG) + cmd.remainder
- self.kerning.setdefault(c, {})[cmd.next_char] = kern(r)
- else:
- r = cmd.op_byte
- if r == 4 or (r > 7 and r != 11):
- # Ligature step with nonstandard code, we output
- # the code verbatim.
- lig = r
- else:
- lig = ""
- if r % 4 > 1:
- lig += "/"
- lig += "LIG"
- if r % 2 != 0:
- lig += "/"
- while r > 3:
- lig += ">"
- r -= 4
- self.ligatures.setdefault(c, {})[cmd.next_char] = (
- lig,
- cmd.remainder,
- )
-
- if cmd.skip_byte >= STOP_FLAG:
- break
- i += cmd.skip_byte + 1
-
-
-if __name__ == "__main__":
- import sys
-
- tfm = TFM(sys.argv[1])
- print(
- "\n".join(
- x
- for x in [
- f"tfm.checksum={tfm.checksum}",
- f"tfm.designsize={tfm.designsize}",
- f"tfm.codingscheme={tfm.codingscheme}",
- f"tfm.fonttype={tfm.fonttype}",
- f"tfm.family={tfm.family}",
- f"tfm.seven_bit_safe_flag={tfm.seven_bit_safe_flag}",
- f"tfm.face={tfm.face}",
- f"tfm.extraheader={tfm.extraheader}",
- f"tfm.fontdimens={tfm.fontdimens}",
- f"tfm.right_boundary_char={tfm.right_boundary_char}",
- f"tfm.left_boundary_char={tfm.left_boundary_char}",
- f"tfm.kerning={tfm.kerning}",
- f"tfm.ligatures={tfm.ligatures}",
- f"tfm.chars={tfm.chars}",
- ]
- )
- )
- print(tfm)
diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py
deleted file mode 100644
index b8a00d6305eeda5a94788017afc1cda0d4a4cd2a..0000000000000000000000000000000000000000
--- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "mbf"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 2e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 30
-config.warmup_epoch = -1
-config.decay_epoch = [10, 20, 25]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py
deleted file mode 100644
index 27705a642e013101c1d624cb0cf7e5955d0614ad..0000000000000000000000000000000000000000
--- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-import torch
-from sad_talker.src.face3d.models.base_model import BaseModel
-from sad_talker.src.face3d.models import networks
-from sad_talker.src.face3d.models.bfm import ParametricFaceModel
-from sad_talker.src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss
-from sad_talker.src.face3d.util import util
-from sad_talker.src.face3d.util.nvdiffrast import MeshRenderer
-from sad_talker.src.face3d.util.preprocess import estimate_norm_torch  # used in compute_losses when use_predef_M is False
-
-import trimesh
-from scipy.io import savemat
-
-class FaceReconModel(BaseModel):
-
- @staticmethod
-    def modify_commandline_options(parser, is_train=False):
-        """Configure options specific to the face reconstruction model.
- """
- # net structure and parameters
- parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure')
- parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth')
- parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc')
- parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/')
- parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model')
-
- # renderer parameters
- parser.add_argument('--focal', type=float, default=1015.)
- parser.add_argument('--center', type=float, default=112.)
- parser.add_argument('--camera_d', type=float, default=10.)
- parser.add_argument('--z_near', type=float, default=5.)
- parser.add_argument('--z_far', type=float, default=15.)
-
- if is_train:
- # training parameters
- parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure')
- parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth')
- parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss')
- parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face')
-
-
- # augmentation parameters
- parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels')
- parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor')
- parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree')
-
- # loss weights
- parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss')
-        parser.add_argument('--w_color', type=float, default=1.92, help='weight for color loss')
- parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss')
- parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss')
- parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss')
- parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss')
- parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss')
- parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss')
- parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss')
-
- opt, _ = parser.parse_known_args()
- parser.set_defaults(
- focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15.
- )
- if is_train:
- parser.set_defaults(
- use_crop_face=True, use_predef_M=False
- )
- return parser
-
- def __init__(self, opt):
- """Initialize this model class.
-
- Parameters:
- opt -- training/test options
-
- A few things can be done here.
- - (required) call the initialization function of BaseModel
- - define loss function, visualization images, model names, and optimizers
- """
- BaseModel.__init__(self, opt) # call the initialization method of BaseModel
-
- self.visual_names = ['output_vis']
- self.model_names = ['net_recon']
- self.parallel_names = self.model_names + ['renderer']
-
- self.facemodel = ParametricFaceModel(
- bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center,
- is_train=self.isTrain, default_name=opt.bfm_model
- )
-
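-        # Camera field of view in degrees implied by the principal point and
-        # focal length: fov = 2 * atan(center / focal).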
- fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi
- self.renderer = MeshRenderer(
- rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center)
- )
-
- if self.isTrain:
- self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc']
-
- self.net_recog = networks.define_net_recog(
- net_recog=opt.net_recog, pretrained_path=opt.net_recog_path
- )
- # loss func name: (compute_%s_loss) % loss_name
- self.compute_feat_loss = perceptual_loss
-            self.compute_color_loss = photo_loss
- self.compute_lm_loss = landmark_loss
- self.compute_reg_loss = reg_loss
- self.compute_reflc_loss = reflectance_loss
-
- self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr)
- self.optimizers = [self.optimizer]
- self.parallel_names += ['net_recog']
- # Our program will automatically call to define schedulers, load networks, and print networks
-
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input: a dictionary that contains the data itself and its metadata information.
- """
- self.input_img = input['imgs'].to(self.device)
- self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None
- self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None
- self.trans_m = input['M'].to(self.device) if 'M' in input else None
- self.image_paths = input['im_paths'] if 'im_paths' in input else None
-
- def forward(self, output_coeff, device):
- self.facemodel.to(device)
- self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \
- self.facemodel.compute_for_render(output_coeff)
- self.pred_mask, _, self.pred_face = self.renderer(
- self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color)
-
- self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff)
-
-
- def compute_losses(self):
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
-
- assert self.net_recog.training == False
- trans_m = self.trans_m
- if not self.opt.use_predef_M:
- trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2])
-
- pred_feat = self.net_recog(self.pred_face, trans_m)
- gt_feat = self.net_recog(self.input_img, self.trans_m)
- self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat)
-
- face_mask = self.pred_mask
- if self.opt.use_crop_face:
- face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf)
-
- face_mask = face_mask.detach()
-        self.loss_color = self.opt.w_color * self.compute_color_loss(
- self.pred_face, self.input_img, self.atten_mask * face_mask)
-
- loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt)
- self.loss_reg = self.opt.w_reg * loss_reg
- self.loss_gamma = self.opt.w_gamma * loss_gamma
-
- self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm)
-
- self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, self.facemodel.skin_mask)
-
- self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \
- + self.loss_lm + self.loss_reflc
-
-
-    def optimize_parameters(self, isTrain=True):
-        """Update network weights; it will be called in every training iteration."""
-        self.forward()
-        self.compute_losses()
- if isTrain:
- self.optimizer.zero_grad()
- self.loss_all.backward()
- self.optimizer.step()
-
- def compute_visuals(self):
- with torch.no_grad():
- input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy()
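-            # Composite the rendered face over the input image using the predicted mask.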
- output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img
- output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy()
-
- if self.gt_lm is not None:
- gt_lm_numpy = self.gt_lm.cpu().numpy()
- pred_lm_numpy = self.pred_lm.detach().cpu().numpy()
- output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b')
- output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r')
-
- output_vis_numpy = np.concatenate((input_img_numpy,
- output_vis_numpy_raw, output_vis_numpy), axis=-2)
- else:
- output_vis_numpy = np.concatenate((input_img_numpy,
- output_vis_numpy_raw), axis=-2)
-
- self.output_vis = torch.tensor(
- output_vis_numpy / 255., dtype=torch.float32
- ).permute(0, 3, 1, 2).to(self.device)
-
- def save_mesh(self, name):
-
- recon_shape = self.pred_vertex # get reconstructed shape
- recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space
- recon_shape = recon_shape.cpu().numpy()[0]
- recon_color = self.pred_color
- recon_color = recon_color.cpu().numpy()[0]
- tri = self.facemodel.face_buf.cpu().numpy()
- mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8))
- mesh.export(name)
-
- def save_coeff(self,name):
-
- pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict}
- pred_lm = self.pred_lm.cpu().numpy()
- pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate
- pred_coeffs['lm68'] = pred_lm
- savemat(name,pred_coeffs)
-
-
-
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py
deleted file mode 100644
index c33f67a515caf1edfdda8f866109eaa85ff12585..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# -*- coding: utf-8 -*-
-# @Date : 2023/7/19 16:28
-# @Author : stellahong (stellahong@fuzhi.ai)
-# @Desc :
-import asyncio
-import base64
-import io
-import json
-import os
-from os.path import join
-from typing import List
-
-from aiohttp import ClientSession
-from PIL import Image, PngImagePlugin
-
-from metagpt.config import Config
-from metagpt.logs import logger
-
-config = Config()
-
-payload = {
- "prompt": "",
- "negative_prompt": "(easynegative:0.8),black, dark,Low resolution",
- "override_settings": {"sd_model_checkpoint": "galaxytimemachinesGTM_photoV20"},
- "seed": -1,
- "batch_size": 1,
- "n_iter": 1,
- "steps": 20,
- "cfg_scale": 7,
- "width": 512,
- "height": 768,
- "restore_faces": False,
- "tiling": False,
- "do_not_save_samples": False,
- "do_not_save_grid": False,
- "enable_hr": False,
- "hr_scale": 2,
- "hr_upscaler": "Latent",
- "hr_second_pass_steps": 0,
- "hr_resize_x": 0,
- "hr_resize_y": 0,
- "hr_upscale_to_x": 0,
- "hr_upscale_to_y": 0,
- "truncate_x": 0,
- "truncate_y": 0,
- "applied_old_hires_behavior_to": None,
- "eta": None,
- "sampler_index": "DPM++ SDE Karras",
- "alwayson_scripts": {},
-}
-
-default_negative_prompt = "(easynegative:0.8),black, dark,Low resolution"
-
-
-class SDEngine:
- def __init__(self):
- # Initialize the SDEngine with configuration
- self.config = Config()
- self.sd_url = self.config.get("SD_URL")
- self.sd_t2i_url = f"{self.sd_url}{self.config.get('SD_T2I_API')}"
- # Define default payload settings for SD API
- self.payload = payload
- logger.info(self.sd_t2i_url)
-
- def construct_payload(
- self,
- prompt,
-        negative_prompt=default_negative_prompt,
-        width=512,
-        height=512,
-        sd_model="galaxytimemachinesGTM_photoV20",
-    ):
-        # Configure the payload with provided inputs
-        self.payload["prompt"] = prompt
-        self.payload["negative_prompt"] = negative_prompt
- self.payload["width"] = width
- self.payload["height"] = height
- self.payload["override_settings"]["sd_model_checkpoint"] = sd_model
- logger.info(f"call sd payload is {self.payload}")
- return self.payload
-
- def _save(self, imgs, save_name=""):
-        save_dir = self.config.get_workspace() / "resources" / "SD_Output"
- if not os.path.exists(save_dir):
- os.makedirs(save_dir, exist_ok=True)
- batch_decode_base64_to_image(imgs, save_dir, save_name=save_name)
-
- async def run_t2i(self, prompts: List):
- # Asynchronously run the SD API for multiple prompts
- session = ClientSession()
- for payload_idx, payload in enumerate(prompts):
- results = await self.run(url=self.sd_t2i_url, payload=payload, session=session)
- self._save(results, save_name=f"output_{payload_idx}")
- await session.close()
-
- async def run(self, url, payload, session):
- # Perform the HTTP POST request to the SD API
- async with session.post(url, json=payload, timeout=600) as rsp:
- data = await rsp.read()
-
- rsp_json = json.loads(data)
- imgs = rsp_json["images"]
- logger.info(f"callback rsp json is {rsp_json.keys()}")
- return imgs
-
- async def run_i2i(self):
-        # TODO: add the image-to-image (img2img) API call
- raise NotImplementedError
-
- async def run_sam(self):
-        # TODO: add the SAM (Segment Anything) API call
- raise NotImplementedError
-
-
-def decode_base64_to_image(img, save_name):
- image = Image.open(io.BytesIO(base64.b64decode(img.split(",", 1)[0])))
- pnginfo = PngImagePlugin.PngInfo()
- logger.info(save_name)
- image.save(f"{save_name}.png", pnginfo=pnginfo)
- return pnginfo, image
-
-
-def batch_decode_base64_to_image(imgs, save_dir="", save_name=""):
- for idx, _img in enumerate(imgs):
- save_name = join(save_dir, save_name)
- decode_base64_to_image(_img, save_name=save_name)
-
-
-if __name__ == "__main__":
- engine = SDEngine()
- prompt = "pixel style, game design, a game interface should be minimalistic and intuitive with the score and high score displayed at the top. The snake and its food should be easily distinguishable. The game should have a simple color scheme, with a contrasting color for the snake and its food. Complete interface boundary"
-
-    payload = engine.construct_payload(prompt)
-
-    event_loop = asyncio.get_event_loop()
-    event_loop.run_until_complete(engine.run_t2i([payload]))
diff --git a/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md b/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md
deleted file mode 100644
index 00a4bb2425901e13f0d843a093024dd0c90dcdc8..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Last summer, I installed Abaqus 6.0-3 on Ubuntu 12.04 and set up a 2D simulation with the parameters given in the reference manual (same build and material as in the manual). When I start the simulation and stop it at a given time, the crack is still propagating; neither the crack nor the simulation stops. How do I stop the simulation?
Hello, I'm an engineer who is struggling to learn Abaqus/finite elements from the Abaqus 6.5 manual. I really can't understand the concepts in there. I love computer science, so I am used to programming. My question is: where do I start so I can learn how to build an Abaqus model to do my research and analyze my problems? What do I need to know, and which topics should I study, before I can learn to program Abaqus to run my models? I would love to get your advice. Please know that I am just a beginner, so please be patient with me. I would love to learn from someone who has experience. Thank you in advance!
-
The Abaqus software is a product of Dassault Systèmes Simulia Corp. To learn more about Abaqus, please contact Abaqus customer support. The Abaqus 6.5 documentation includes the user's guide, a description of the Abaqus EOS/HT/PEST models, and a chapter on methods for the numerical simulation of composites, as well as a tutorial on combining commercial and third-party products and a comparison of MultiPAS and Abaqus-AddFEM for the simulation of composites (MultiPAS, the Multiscale Particulate Automata Simulation, is a software framework designed and implemented for this purpose). The Abaqus 6.5 download is offered as .pdf, .rar, and .txt archives, as a torrent, or directly from the author's web site.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md b/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md
deleted file mode 100644
index 92c32377e8e93f8c1608523589ba96d8707e33b2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Assassin Creed Brotherhood Serial Number Activation 29
-
-
-
-Download File >> [https://maudaracte.blogspot.com/?file=2tvJcQ](https://maudaracte.blogspot.com/?file=2tvJcQ)
-
-
-
-
-
-
-
-
-
-
-How to Activate Assassin Creed Brotherhood Serial Number 29
-
-If you are looking for a way to activate Assassin Creed Brotherhood serial number 29, you are not alone. Many gamers have faced this issue when trying to play this popular action-adventure game on their PC. In this article, we will show you how to solve this problem and enjoy the game without any hassle.
-
-What is Assassin Creed Brotherhood Serial Number 29?
-
-Assassin Creed Brotherhood is the third installment in the Assassin Creed series, developed by Ubisoft Montreal and released in 2010. The game follows the adventures of Ezio Auditore da Firenze, a master assassin who fights against the Templars in Renaissance Italy.
-
-To play the game on PC, you need to have a valid serial key that you can get when you buy the original version of the game. However, some users have reported that they get an error message saying that their serial key has already been accessed and activated when they try to enter it in the Ubisoft launcher. This is what we call Assassin Creed Brotherhood serial number 29.
-
-Why does Assassin Creed Brotherhood Serial Number 29 happen?
-
-There are several possible reasons why you may encounter Assassin Creed Brotherhood serial number 29. Some of them are:
-
-
-You have bought a pirated or cracked version of the game that has an invalid or used serial key.
-You have entered the serial key incorrectly or mistyped it.
-You have installed the game on more than one PC using the same serial key.
-You have changed your hardware configuration or updated your operating system after installing the game.
-You have a corrupted or outdated Ubisoft launcher that prevents you from activating the game.
-
-
-How to fix Assassin Creed Brotherhood Serial Number 29?
-
-Depending on the cause of your problem, there are different solutions that you can try to fix Assassin Creed Brotherhood serial number 29. Here are some of them:
-
-
-Make sure you have bought a legitimate copy of the game from a trusted source and that you have a valid serial key. If you have bought a pirated or cracked version of the game, you may need to buy a new one or contact Ubisoft support for assistance.
-Check if you have entered the serial key correctly and that there are no spaces or extra characters. You can also copy and paste the serial key from your email confirmation or receipt instead of typing it manually.
-If you have installed the game on more than one PC using the same serial key, you may need to uninstall it from one of them or buy another serial key. Each serial key can only be used on one PC at a time.
-If you have changed your hardware configuration or updated your operating system after installing the game, you may need to reactivate it using your serial key. You can do this by launching the Ubisoft launcher and clicking on "Activate a product" under "Games".
-If you have a corrupted or outdated Ubisoft launcher, you may need to reinstall it or update it to the latest version. You can download the Ubisoft launcher from here. After installing or updating it, restart your PC and try to activate the game again.
-
-
-Conclusion
-
-Assassin Creed Brotherhood serial number 29 is a common error that many PC gamers face when trying to play this awesome game. However, it is not impossible to fix it. By following the steps above, you should be able to activate your game and enjoy it without any trouble.
-
-If none of these solutions work for you, you may need to contact Ubisoft support for further help. You can reach them through this link. They will be happy to assist you and resolve your issue as soon as possible.
-
-We hope this article has been helpful for you and that you have learned how to fix Assassin Creed Brotherhood serial number 29. If you have any questions or comments, feel free to leave them below. We would love to hear from you.
-
-
-
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py
deleted file mode 100644
index 60d01a67c16e46d8e36718ebe90a723b7d541e1a..0000000000000000000000000000000000000000
--- a/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-# TODO: This is the loaded index, underneath a searcher.
-
-
-"""
-## Operations:
-
-index = Index(index='/path/to/index')
-index.load_to_memory()
-
-batch_of_pids = [2324,32432,98743,23432]
-index.lookup(batch_of_pids, device='cuda:0') -> (N, doc_maxlen, dim)
-
-index.iterate_over_parts()
-
-"""
diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
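-        # Affine coupling: split the channels in half, predict (m, logs) for the
-        # second half from the first, then apply (or invert) the affine transform.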
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
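-        # Monotonic rational-quadratic spline flow on the second half of the channels,
-        # parameterised by the widths/heights/derivatives predicted above.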
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/custom.py b/spaces/dineshreddy/WALT/mmdet/datasets/custom.py
deleted file mode 100644
index 356f01ede6456312920b6fe8fa618258d8898075..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/datasets/custom.py
+++ /dev/null
@@ -1,334 +0,0 @@
-import os.path as osp
-import warnings
-from collections import OrderedDict
-
-import mmcv
-import numpy as np
-from mmcv.utils import print_log
-from torch.utils.data import Dataset
-
-from mmdet.core import eval_map, eval_recalls
-from .builder import DATASETS
-from .pipelines import Compose
-
-
-@DATASETS.register_module()
-class CustomDataset(Dataset):
- """Custom dataset for detection.
-
- The annotation format is shown as follows. The `ann` field is optional for
- testing.
-
- .. code-block:: none
-
- [
- {
- 'filename': 'a.jpg',
- 'width': 1280,
- 'height': 720,
- 'ann': {
- 'bboxes': (n, 4) in (x1, y1, x2, y2) order.
- 'labels': (n, ),
- 'bboxes_ignore': (k, 4), (optional field)
-                    'labels_ignore': (k, ) (optional field)
- }
- },
- ...
- ]
-
- Args:
- ann_file (str): Annotation file path.
- pipeline (list[dict]): Processing pipeline.
- classes (str | Sequence[str], optional): Specify classes to load.
- If is None, ``cls.CLASSES`` will be used. Default: None.
- data_root (str, optional): Data root for ``ann_file``,
- ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified.
- test_mode (bool, optional): If set True, annotation will not be loaded.
- filter_empty_gt (bool, optional): If set true, images without bounding
- boxes of the dataset's classes will be filtered out. This option
- only works when `test_mode=False`, i.e., we never filter images
- during tests.
- """
-
- CLASSES = None
-
- def __init__(self,
- ann_file,
- pipeline,
- classes=None,
- data_root=None,
- img_prefix='',
- seg_prefix=None,
- proposal_file=None,
- test_mode=False,
- filter_empty_gt=True):
- self.ann_file = ann_file
- self.data_root = data_root
- self.img_prefix = img_prefix
- self.seg_prefix = seg_prefix
- self.proposal_file = proposal_file
- self.test_mode = test_mode
- self.filter_empty_gt = filter_empty_gt
- self.CLASSES = self.get_classes(classes)
-
- # join paths if data_root is specified
- if self.data_root is not None:
- if not osp.isabs(self.ann_file):
- self.ann_file = osp.join(self.data_root, self.ann_file)
- if not (self.img_prefix is None or osp.isabs(self.img_prefix)):
- self.img_prefix = osp.join(self.data_root, self.img_prefix)
- if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)):
- self.seg_prefix = osp.join(self.data_root, self.seg_prefix)
- if not (self.proposal_file is None
- or osp.isabs(self.proposal_file)):
- self.proposal_file = osp.join(self.data_root,
- self.proposal_file)
- # load annotations (and proposals)
- self.data_infos = self.load_annotations(self.ann_file)
-
- if self.proposal_file is not None:
- self.proposals = self.load_proposals(self.proposal_file)
- else:
- self.proposals = None
-
- # filter images too small and containing no annotations
- if not test_mode:
- valid_inds = self._filter_imgs()
- self.data_infos = [self.data_infos[i] for i in valid_inds]
- if self.proposals is not None:
- self.proposals = [self.proposals[i] for i in valid_inds]
- # set group flag for the sampler
- self._set_group_flag()
-
- # processing pipeline
- self.pipeline = Compose(pipeline)
-
- def __len__(self):
- """Total number of samples of data."""
- return len(self.data_infos)
-
- def load_annotations(self, ann_file):
- """Load annotation from annotation file."""
- return mmcv.load(ann_file)
-
- def load_proposals(self, proposal_file):
- """Load proposal from proposal file."""
- return mmcv.load(proposal_file)
-
- def get_ann_info(self, idx):
- """Get annotation by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Annotation info of specified index.
- """
-
- return self.data_infos[idx]['ann']
-
- def get_cat_ids(self, idx):
- """Get category ids by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
-        return self.data_infos[idx]['ann']['labels'].astype(np.int64).tolist()
-
- def pre_pipeline(self, results):
- """Prepare results dict for pipeline."""
- results['img_prefix'] = self.img_prefix
- results['seg_prefix'] = self.seg_prefix
- results['proposal_file'] = self.proposal_file
- results['bbox_fields'] = []
- results['mask_fields'] = []
- results['seg_fields'] = []
-
- def _filter_imgs(self, min_size=32):
- """Filter images too small."""
- if self.filter_empty_gt:
- warnings.warn(
- 'CustomDataset does not support filtering empty gt images.')
- valid_inds = []
- for i, img_info in enumerate(self.data_infos):
- if min(img_info['width'], img_info['height']) >= min_size:
- valid_inds.append(i)
- return valid_inds
-
- def _set_group_flag(self):
- """Set flag according to image aspect ratio.
-
- Images with aspect ratio greater than 1 will be set as group 1,
- otherwise group 0.
- """
- self.flag = np.zeros(len(self), dtype=np.uint8)
- for i in range(len(self)):
- img_info = self.data_infos[i]
- if img_info['width'] / img_info['height'] > 1:
- self.flag[i] = 1
-
- def _rand_another(self, idx):
- """Get another random index from the same group as the given index."""
- pool = np.where(self.flag == self.flag[idx])[0]
- return np.random.choice(pool)
-
- def __getitem__(self, idx):
- """Get training/test data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training/test data (with annotation if `test_mode` is set \
- True).
- """
-
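-        # WALT-specific behaviour: if preparing a sample raises, retry with a
-        # neighbouring index instead of failing, so corrupted samples are skipped.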
-        if self.test_mode:
-            # Some test samples may fail to load; fall back to the next index.
-            while True:
-                try:
-                    return self.prepare_test_img(idx)
-                except Exception:
-                    idx = (idx + 1) % len(self)
-
-        while True:
-            try:
-                data = self.prepare_train_img(idx)
-            except Exception:
-                # Fall back to the previous sample if this one fails to load.
-                data = self.prepare_train_img(idx - 1)
-
-            if data is None:
-                idx = self._rand_another(idx)
-                continue
-            return data
-
- def prepare_train_img(self, idx):
- """Get training data and annotations after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training data and annotation after pipeline with new keys \
- introduced by pipeline.
- """
-
- img_info = self.data_infos[idx]
- ann_info = self.get_ann_info(idx)
- results = dict(img_info=img_info, ann_info=ann_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- def prepare_test_img(self, idx):
- """Get testing data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Testing data after pipeline with new keys introduced by \
- pipeline.
- """
-
- img_info = self.data_infos[idx]
- results = dict(img_info=img_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- @classmethod
- def get_classes(cls, classes=None):
- """Get class names of current dataset.
-
- Args:
- classes (Sequence[str] | str | None): If classes is None, use
- default CLASSES defined by builtin dataset. If classes is a
- string, take it as a file name. The file contains the name of
- classes where each line contains one class name. If classes is
- a tuple or list, override the CLASSES defined by the dataset.
-
- Returns:
- tuple[str] or list[str]: Names of categories of the dataset.
- """
- if classes is None:
- return cls.CLASSES
-
- if isinstance(classes, str):
- # take it as a file path
- class_names = mmcv.list_from_file(classes)
- elif isinstance(classes, (tuple, list)):
- class_names = classes
- else:
- raise ValueError(f'Unsupported type {type(classes)} of classes.')
-
- return class_names
-
- def format_results(self, results, **kwargs):
- """Place holder to format result to dataset specific output."""
-
- def evaluate(self,
- results,
- metric='mAP',
- logger=None,
- proposal_nums=(100, 300, 1000),
- iou_thr=0.5,
- scale_ranges=None):
- """Evaluate the dataset.
-
- Args:
- results (list): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated.
- logger (logging.Logger | None | str): Logger used for printing
- related information during evaluation. Default: None.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
- iou_thr (float | list[float]): IoU threshold. Default: 0.5.
- scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP.
- Default: None.
- """
-
- if not isinstance(metric, str):
- assert len(metric) == 1
- metric = metric[0]
- allowed_metrics = ['mAP', 'recall']
- if metric not in allowed_metrics:
- raise KeyError(f'metric {metric} is not supported')
- annotations = [self.get_ann_info(i) for i in range(len(self))]
- eval_results = OrderedDict()
- iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr
- if metric == 'mAP':
- assert isinstance(iou_thrs, list)
- mean_aps = []
- for iou_thr in iou_thrs:
- print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}')
- mean_ap, _ = eval_map(
- results,
- annotations,
- scale_ranges=scale_ranges,
- iou_thr=iou_thr,
- dataset=self.CLASSES,
- logger=logger)
- mean_aps.append(mean_ap)
- eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3)
- eval_results['mAP'] = sum(mean_aps) / len(mean_aps)
- elif metric == 'recall':
- gt_bboxes = [ann['bboxes'] for ann in annotations]
- recalls = eval_recalls(
- gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
- for i, num in enumerate(proposal_nums):
- for j, iou in enumerate(iou_thrs):
- eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
- if recalls.shape[1] > 1:
- ar = recalls.mean(axis=1)
- for i, num in enumerate(proposal_nums):
- eval_results[f'AR@{num}'] = ar[i]
- return eval_results
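Note on the `evaluate` method deleted above: it aggregates VOC-style mAP over one or more IoU thresholds, or proposal recall. A minimal usage sketch, assuming an already-built dataset instance and per-class detection arrays (both hypothetical here, not part of this repository):

```python
# Hypothetical usage of the CustomDataset.evaluate method shown above.
# `dataset` is assumed to be an already-constructed CustomDataset (or subclass);
# results[i] holds one (N, 5) array of [x1, y1, x2, y2, score] per class.
import numpy as np

num_classes = len(dataset.CLASSES)
results = [
    [np.zeros((0, 5), dtype=np.float32) for _ in range(num_classes)]
    for _ in range(len(dataset))
]

# mAP averaged over two IoU thresholds; per-threshold APs are also reported.
metrics = dataset.evaluate(results, metric='mAP', iou_thr=[0.5, 0.75])
print(metrics['AP50'], metrics['AP75'], metrics['mAP'])
```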
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py
deleted file mode 100644
index fb9a44b3422dae5a9788d39b0901335dfc6076a9..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py
+++ /dev/null
@@ -1,18 +0,0 @@
-dataset_type = 'TextDetDataset'
-data_root = 'data/synthtext'
-
-train = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_training.lmdb',
- loader=dict(
- type='AnnFileLoader',
- repeat=1,
- file_format='lmdb',
- parser=dict(
- type='LineJsonParser',
- keys=['file_name', 'height', 'width', 'annotations'])),
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-train_list = [train]
-test_list = [train]
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py
deleted file mode 100644
index fbaacc19b19f6f8284eb65c7d2d2aa95e8051427..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_600e.py',
- '../../_base_/det_models/psenet_r50_fpnf.py',
- '../../_base_/det_datasets/icdar2015.py',
- '../../_base_/det_pipelines/psenet_pipeline.py'
-]
-
-model = {{_base_.model_quad}}
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}}
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/diy2023/databricks-dolly-v1-6b/app.py b/spaces/diy2023/databricks-dolly-v1-6b/app.py
deleted file mode 100644
index 671cdd19c85ad2351038f5fffc40c71a5657b4c8..0000000000000000000000000000000000000000
--- a/spaces/diy2023/databricks-dolly-v1-6b/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/databricks/dolly-v1-6b").launch()
\ No newline at end of file
diff --git a/spaces/docs-demos/bart-large-mnli/README.md b/spaces/docs-demos/bart-large-mnli/README.md
deleted file mode 100644
index 98a3cf74d10c26dc43346681590dea5655c4e12a..0000000000000000000000000000000000000000
--- a/spaces/docs-demos/bart-large-mnli/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: BART
-emoji: 🐠
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
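The fields documented in this deleted README correspond to the YAML front matter at the top of a Space's README.md. A minimal illustrative front-matter block, with placeholder values rather than this Space's actual configuration:

```yaml
---
title: BART Large MNLI Demo
emoji: 🐠
colorFrom: indigo
colorTo: red
sdk: gradio
app_file: app.py
pinned: false
---
```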
diff --git a/spaces/doevent/animegan-v2-for-videos/README.md b/spaces/doevent/animegan-v2-for-videos/README.md
deleted file mode 100644
index 9970b8f305fbf1c35599ab14b46236dc543ba015..0000000000000000000000000000000000000000
--- a/spaces/doevent/animegan-v2-for-videos/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: AnimeGANv2 On Videos
-emoji: 🔥
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.9
-app_file: app.py
-pinned: false
----
-
-# AnimeGAN-v2 For Videos
-
-[](https://huggingface.co/spaces/nateraw/animegan-v2-for-videos)
-
-Apply AnimeGAN-v2 across frames of a video
-
----
-
-Autogenerated using [this template](https://github.com/nateraw/spaces-template)
\ No newline at end of file
diff --git a/spaces/doluvor/faster-whisper-webui/app-local.py b/spaces/doluvor/faster-whisper-webui/app-local.py
deleted file mode 100644
index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/app-local.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1))
\ No newline at end of file
diff --git a/spaces/dovanquyet/PsyPlus/README.md b/spaces/dovanquyet/PsyPlus/README.md
deleted file mode 100644
index 97b097702abb6509467c39fd3cc7544965e414e2..0000000000000000000000000000000000000000
--- a/spaces/dovanquyet/PsyPlus/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: PsyPlus
-emoji: 🤖
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.10.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-For more information about this product, please visit this Notion [page](https://www.notion.so/AI-Consulting-Design-Scheme-0a9c5288820d4fec98ecc7cc1e84be51) (you need permission to access this page)
-
-# Notes
-
-### 2022/12/20
-
-- DONE: turned the chatbot into a session variable so that different sessions show different conversations
-- The chat flow triggers euc 200 when it detects a negative emotion with prob > threshold. Thus, only euc 100 and free chat make up the chat loop, while euc 200 pops up occasionally. The trigger is intentionally NOT regular (currently it fires at most once per conversation), because triggering too often would bother users
-- Fixed the problem with the dialog model. It is now configured as intended. Of course, that does not guarantee good responses
-- TODOs are already written in the main file
-- Successfully converted plain euc 100 and 200 to the chat flow
\ No newline at end of file
diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py
deleted file mode 100644
index e3413961d1d184b99835eb1e919b052d70298bc6..0000000000000000000000000000000000000000
--- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from .GroundingDINO import build_groundingdino
-
-
-def build_model(args):
- # we use register to maintain models from catdet6 on.
- from .registry import MODULE_BUILD_FUNCS
-
- assert args.modelname in MODULE_BUILD_FUNCS._module_dict
- build_func = MODULE_BUILD_FUNCS.get(args.modelname)
- model = build_func(args)
- return model
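The `build_model` function above relies on a name-to-builder registry (`MODULE_BUILD_FUNCS`). The sketch below illustrates the general registry pattern with a simplified stand-in class; it is not GroundingDINO's actual registry implementation:

```python
# Generic sketch of the name -> build-function registry pattern referenced above.
# GroundingDINO's real registry lives in .registry; this class is a stand-in.
class Registry:
    def __init__(self):
        self._module_dict = {}

    def register_with_name(self, name):
        def decorator(build_func):
            self._module_dict[name] = build_func
            return build_func
        return decorator

    def get(self, name):
        return self._module_dict[name]

MODULE_BUILD_FUNCS = Registry()

@MODULE_BUILD_FUNCS.register_with_name('groundingdino')
def build_groundingdino(args):
    # ... construct and return the model from args ...
    return object()

# build_model(args) can then look up args.modelname and call the registered builder.
build_func = MODULE_BUILD_FUNCS.get('groundingdino')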
diff --git a/spaces/ds520/bingo/src/components/settings.tsx b/spaces/ds520/bingo/src/components/settings.tsx
deleted file mode 100644
index 45ba6044ff9cbe584f62292a49ea2ace9acc1f48..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/components/settings.tsx
+++ /dev/null
@@ -1,157 +0,0 @@
-import { useEffect, useState } from 'react'
-import { useAtom } from 'jotai'
-import { Switch } from '@headlessui/react'
-import { toast } from 'react-hot-toast'
-import { hashAtom, voiceAtom } from '@/state'
-import {
- Dialog,
- DialogContent,
- DialogDescription,
- DialogFooter,
- DialogHeader,
- DialogTitle
-} from '@/components/ui/dialog'
-import { Button } from './ui/button'
-import { Input } from './ui/input'
-import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils'
-import { ExternalLink } from './external-link'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-
-export function Settings() {
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- const [loc, setLoc] = useAtom(hashAtom)
- const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys)))
- const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0')
- const [enableTTS, setEnableTTS] = useAtom(voiceAtom)
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
-
- if (loc === 'settings') {
- return (
-
- )
- } else if (loc === 'voice') {
- return (
-
- )
- }
- return null
-}
diff --git a/spaces/dwolfe66/text-generation-webui-space/README.md b/spaces/dwolfe66/text-generation-webui-space/README.md
deleted file mode 100644
index 527e068e61d6f694a0c92c6cce7724137bbed79d..0000000000000000000000000000000000000000
--- a/spaces/dwolfe66/text-generation-webui-space/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Text Generation Webui Space
-emoji: 🏃
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.20.1
-app_file: run.py
-pinned: false
-license: mit
-duplicated_from: antonovmaxim/text-generation-webui-space
----
-
-Check out this repo https://github.com/oobabooga/text-generation-webui
diff --git a/spaces/dylanebert/FarmingGame/Build/Build.loader.js b/spaces/dylanebert/FarmingGame/Build/Build.loader.js
deleted file mode 100644
index 624c5f8442f0ea1722cf63eb8ea80823e2d8e2ef..0000000000000000000000000000000000000000
--- a/spaces/dylanebert/FarmingGame/Build/Build.loader.js
+++ /dev/null
@@ -1 +0,0 @@
-function createUnityInstance(e,t,r){function n(e,r){if(!n.aborted&&t.showBanner)return"error"==r&&(n.aborted=!0),t.showBanner(e,r);switch(r){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function o(e){var t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";if(n.startsWith(r)&&(n=n.substring(r.length)),r+="\n"+n.trim(),r&&c.stackTraceRegExp&&c.stackTraceRegExp.test(r)){var o=e.filename||t&&(t.fileName||t.sourceURL)||"",a=e.lineno||t&&(t.lineNumber||t.line)||0;s(r,o,a)}}function a(e){e.preventDefault()}function s(e,t,r){if(e.indexOf("fullscreen error")==-1){if(c.startupErrorHandler)return void c.startupErrorHandler(e,t,r);if(!(c.errorHandler&&c.errorHandler(e,t,r)||(console.log("Invoking error handler due to\n"+e),"function"==typeof dump&&dump("Invoking error handler due to\n"+e),s.didShowErrorMessage))){var e="An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:\n"+e;e.indexOf("DISABLE_EXCEPTION_CATCHING")!=-1?e="An exception has occurred, but exception handling has been disabled in this build. If you are the developer of this content, enable exceptions in your project WebGL player settings to be able to catch the exception or see the stack trace.":e.indexOf("Cannot enlarge memory arrays")!=-1?e="Out of memory. If you are the developer of this content, try allocating more memory to your WebGL build in the WebGL player settings.":e.indexOf("Invalid array buffer length")==-1&&e.indexOf("Invalid typed array length")==-1&&e.indexOf("out of memory")==-1&&e.indexOf("could not allocate memory")==-1||(e="The browser could not allocate enough memory for the WebGL content. If you are the developer of this content, try allocating less memory to your WebGL build in the WebGL player settings."),alert(e),s.didShowErrorMessage=!0}}}function i(e,t){if("symbolsUrl"!=e){var n=c.downloadProgress[e];n||(n=c.downloadProgress[e]={started:!1,finished:!1,lengthComputable:!1,total:0,loaded:0}),"object"!=typeof t||"progress"!=t.type&&"load"!=t.type||(n.started||(n.started=!0,n.lengthComputable=t.lengthComputable),n.total=t.total,n.loaded=t.loaded,"load"==t.type&&(n.finished=!0));var o=0,a=0,s=0,i=0,d=0;for(var e in c.downloadProgress){var n=c.downloadProgress[e];if(!n.started)return 0;s++,n.lengthComputable?(o+=n.loaded,a+=n.total,i++):n.finished||d++}var u=s?(s-d-(a?i*(a-o)/a:0))/s:0;r(.9*u)}}function d(e){i(e);var t=c.cacheControl(c[e]),r=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,o=c[e],a=/file:\/\//.exec(o)?"same-origin":void 0,s=r(c[e],{method:"GET",companyName:c.companyName,productName:c.productName,control:t,mode:a,onProgress:function(t){i(e,t)}});return s.then(function(e){return e.parsedBody}).catch(function(t){var r="Failed to download file "+c[e];"file:"==location.protocol?n(r+". Loading web pages via a file:// URL without a web server is not supported by this browser. 
Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(r)})}function u(){return new Promise(function(e,t){var r=document.createElement("script");r.src=c.frameworkUrl,r.onload=function(){if("undefined"==typeof unityFramework||!unityFramework){var t=[["br","br"],["gz","gzip"]];for(var o in t){var a=t[o];if(c.frameworkUrl.endsWith("."+a[0])){var s="Unable to parse "+c.frameworkUrl+"!";if("file:"==location.protocol)return void n(s+" Loading pre-compressed (brotli or gzip) content via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host compressed Unity content, or use the Unity Build and Run option.","error");if(s+=' This can happen if build compression was enabled but web server hosting the content was misconfigured to not serve the file with HTTP Response Header "Content-Encoding: '+a[1]+'" present. Check browser Console and Devtools Network tab to debug.',"br"==a[0]&&"http:"==location.protocol){var i=["localhost","127.0.0.1"].indexOf(location.hostname)!=-1?"":"Migrate your server to use HTTPS.";s=/Firefox/.test(navigator.userAgent)?"Unable to parse "+c.frameworkUrl+'! If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+i+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'! If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'}return void n(s,"error")}}n("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var d=unityFramework;unityFramework=null,r.onload=null,e(d)},r.onerror=function(e){n("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. 
(also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(r),c.deinitializers.push(function(){document.body.removeChild(r)})})}function l(){u().then(function(e){e(c)});var e=d("dataUrl");c.preRun.push(function(){c.addRunDependency("dataUrl"),e.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";r+=n.length;var o=t.getUint32(r,!0);for(r+=4;r0;u=l,l=d.indexOf("/",u)+1)c.FS_createPath(d.substring(0,u),d.substring(u,l-1),!0,!0);c.FS_createDataFile(d,null,e.subarray(a,a+s),!0,!0,!0)}c.removeRunDependency("dataUrl")})})}r=r||function(){};var c={canvas:e,webglContextAttributes:{preserveDrawingBuffer:!1},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){var r=window.setInterval(e,t);return this.intervals[r]=!0,r},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&e.indexOf("wasm streaming compile failed")!=-1&&(e.toLowerCase().indexOf("mime")!=-1?n('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):n('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return"build.wasm"==e?this.codeUrl:e},disabledCanvasEvents:["contextmenu","dragstart"]};for(var h in t)c[h]=t[h];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var f=c.disabledCanvasEvents.slice();f.forEach(function(t){e.addEventListener(t,a)}),window.addEventListener("error",o),window.addEventListener("unhandledrejection",o),c.deinitializers.push(function(){c.disableAccessToMediaDevices(),f.forEach(function(t){e.removeEventListener(t,a)}),window.removeEventListener("error",o),window.removeEventListener("unhandledrejection",o);for(var t in c.intervals)window.clearInterval(t);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;e=200&&this.status<=299}.bind(this)})}function o(e,t,r,n,o){var a={url:e,version:d.version,company:t,product:r,updated:n,revalidated:n,accessed:n,response:{headers:{}}};return o&&(o.headers.forEach(function(e,t){a.response.headers[t]=e}),["redirected","status","statusText","type","url"].forEach(function(e){a.response[e]=o[e]}),a.response.parsedBody=o.parsedBody),a}function a(e,t){return(!t||!t.method||"GET"===t.method)&&((!t||["must-revalidate","immutable"].indexOf(t.control)!=-1)&&!!e.match("^https?://"))}function s(s,l){function c(t,r){return u(t,r).then(function(t){return!g.enabled||g.revalidated?t:304===t.status?(g.result.revalidated=g.result.accessed,g.revalidated=!0,f.storeRequest(g.result).then(function(){e("'"+g.result.url+"' successfully revalidated and served from the indexedDB cache")}).catch(function(t){e("'"+g.result.url+"' successfully revalidated but not stored in the indexedDB cache due to the error: "+t)}),new 
n(g.result.response)):(200==t.status?(g.result=o(t.url,g.company,g.product,g.accessed,t),g.revalidated=!0,f.storeRequest(g.result).then(function(){e("'"+g.result.url+"' successfully downloaded and stored in the indexedDB cache")}).catch(function(t){e("'"+g.result.url+"' successfully downloaded but not stored in the indexedDB cache due to the error: "+t)})):e("'"+g.result.url+"' request failed with status: "+t.status+" "+t.statusText),t)})}function h(e){l&&l.onProgress&&(l.onProgress({type:"progress",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}),l.onProgress({type:"load",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}))}var f=i.getInstance(),p=t("string"==typeof s?s:s.url),g={enabled:a(p,l)};return l&&(g.control=l.control,g.company=l.company,g.product=l.product),g.result=o(p,g.company,g.product,Date.now()),g.revalidated=!1,g.enabled?f.loadRequest(g.result.url).then(function(t){if(!t||t.version!==d.version)return c(s,l);g.result=t,g.result.accessed=Date.now();var o=new n(g.result.response);if("immutable"==g.control)return g.revalidated=!0,f.storeRequest(g.result),e("'"+g.result.url+"' served from the indexedDB cache without revalidation"),h(o),o;if(r(g.result.url)&&(o.headers.get("Last-Modified")||o.headers.get("ETag")))return fetch(g.result.url,{method:"HEAD"}).then(function(t){return g.revalidated=["Last-Modified","ETag"].every(function(e){return!o.headers.get(e)||o.headers.get(e)==t.headers.get(e)}),g.revalidated?(g.result.revalidated=g.result.accessed,f.storeRequest(g.result),e("'"+g.result.url+"' successfully revalidated and served from the indexedDB cache"),h(o),o):c(s,l)});l=l||{};var a=l.headers||{};return l.headers=a,o.headers.get("Last-Modified")?(a["If-Modified-Since"]=o.headers.get("Last-Modified"),a["Cache-Control"]="no-cache"):o.headers.get("ETag")&&(a["If-None-Match"]=o.headers.get("ETag"),a["Cache-Control"]="no-cache"),c(s,l)}).catch(function(t){return e("Failed to load '"+g.result.url+"' from indexedDB cache due to the error: "+t),u(s,l)}):u(s,l)}var i=c.UnityCache,d=i.RequestStore,u=c.fetchWithProgress;return n.prototype.arrayBuffer=function(){return Promise.resolve(this.parsedBody.buffer)},n.prototype.blob=function(){return this.arrayBuffer().then(function(e){return new Blob([e])})},n.prototype.json=function(){return this.text().then(function(e){return JSON.parse(e)})},n.prototype.text=function(){var e=new TextDecoder;return Promise.resolve(e.decode(this.parsedBody))},s}(),new Promise(function(e,t){c.SystemInfo.hasWebGL?1==c.SystemInfo.hasWebGL?t('Your browser does not support graphics API "WebGL 2" which is required for this content.'):c.SystemInfo.hasWasm?(1==c.SystemInfo.hasWebGL&&c.print('Warning: Your browser does not support "WebGL 2" Graphics API, switching to "WebGL 1"'),c.startupErrorHandler=t,r(0),c.postRun.push(function(){r(1),delete c.startupErrorHandler,e(m)}),l()):t("Your browser does not support WebAssembly."):t("Your browser does not support WebGL.")})}
\ No newline at end of file
diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py
deleted file mode 100644
index e9246c6c8f2ff3c37a7f8529ea1593c7f80f887e..0000000000000000000000000000000000000000
--- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py
+++ /dev/null
@@ -1,393 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import commons
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import text_to_sequence, cleaned_text_to_sequence
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_and_text)
- self._filter()
-
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- audiopath, text = audiopath_and_text[0], audiopath_and_text[1]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- return (text, spec, wav)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} {} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text and aduio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths
-
-
-"""Multi speaker version"""
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_sid_text_new = []
- lengths = []
- for audiopath, sid, text in self.audiopaths_sid_text:
- audiopath = "E:/uma_voice/" + audiopath
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_sid_text_new.append([audiopath, sid, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_sid_text = audiopaths_sid_text_new
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- sid = self.get_sid(sid)
- return (text, spec, wav, sid)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} {} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- sid[i] = row[3]
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i+1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
-
- # subsample
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid+1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
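For context on the deleted data_utils.py: the loader, collate function and bucket sampler are normally wired together as sketched below. Paths, hyperparameters and the `hps` config object are illustrative assumptions, not values taken from this repository:

```python
# Hypothetical wiring of TextAudioLoader, TextAudioCollate and
# DistributedBucketSampler from data_utils.py above.
from torch.utils.data import DataLoader

# hps is assumed to be the usual VITS hparams object loaded from a JSON config.
train_dataset = TextAudioLoader("filelists/train.txt", hps.data)  # path/config assumed
collate_fn = TextAudioCollate()
train_sampler = DistributedBucketSampler(
    train_dataset,
    batch_size=16,
    boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
    num_replicas=1,  # single-process example
    rank=0,
    shuffle=True)
train_loader = DataLoader(
    train_dataset,
    num_workers=4,
    collate_fn=collate_fn,
    batch_sampler=train_sampler,
    pin_memory=True)

for epoch in range(1, 3):
    train_sampler.set_epoch(epoch)  # reseeds the per-epoch bucket shuffle
    for text, text_len, spec, spec_len, wav, wav_len in train_loader:
        pass  # training step goes here
```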
diff --git a/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md b/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md
deleted file mode 100644
index 88cfb45fdacf2c74ea4c9afc85e8a0c38911de3b..0000000000000000000000000000000000000000
--- a/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Tuto Sentiment Analysis App
-emoji: 🔥
-pinned: false
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
----
-# Tuto Sentiment Analysis App
-This Sentiment Analysis App is a tweet classification system based on a [Hugging Face pretrained model (DistilBERT)](https://huggingface.co/docs/transformers/model_doc/distilbert), [fine-tuned](https://huggingface.co/GhylB/Sentiment_Analysis_DistilBERT) by one of my brilliant trainees, [Mr. Gilbert Botchway](https://www.linkedin.com/in/gilbert-botchway/), on the [Zindi Covid-19 tweet classification dataset](https://zindi.africa/competitions/covid-19-tweet-classification)
-
-## Setup
-### Direct Execution
-Please follow the instructions below to run the app.
-`commands will be added soon`
-### Docker
-Please follow the instructions below to run the app.
-`commands will be added soon`
\ No newline at end of file
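Since the deleted README links the fine-tuned checkpoint on the Hub, here is a minimal sketch of querying it directly with the transformers pipeline; the exact label names returned depend on the checkpoint's config, so treat them as an assumption:

```python
# Minimal sketch: run the fine-tuned sentiment model referenced in the README above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GhylB/Sentiment_Analysis_DistilBERT")  # checkpoint linked in the README

print(classifier("Vaccines are a big step toward ending the pandemic."))
# -> e.g. [{'label': '...', 'score': 0.97}]  (label names depend on the checkpoint)
```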
diff --git a/spaces/ebgoldstein/FRF_Heavies/README.md b/spaces/ebgoldstein/FRF_Heavies/README.md
deleted file mode 100644
index 209b943ee41776af918de134bcbffe48a804d32f..0000000000000000000000000000000000000000
--- a/spaces/ebgoldstein/FRF_Heavies/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: FRF Heavy Minerals
-emoji: 📉
-colorFrom: red
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: ebgoldstein/FRFArgus
----
diff --git a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md b/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md
deleted file mode 100644
index 83cefef74be556c587f01c9c050ae1931080026f..0000000000000000000000000000000000000000
--- a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stable Diffusion Webui
-emoji: 🚀
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/eson/tokenizer-arena/vocab/icetk/README.md b/spaces/eson/tokenizer-arena/vocab/icetk/README.md
deleted file mode 100644
index f26f84a0d10b9a853b8a162b5be6d02394432d96..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/vocab/icetk/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
-## Overview
-
-```
-num_image_tokens = 20000 image_tokenizer
-num_text_tokens = 130000 text_tokenizer
-```
-
-150,000 tokens in total
-
-## text_tokenizer
-
-```
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[0]  corresponds to overall id 20000+0
-piece: ""
-score: 0.0
-type: UNKNOWN
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[1]  corresponds to overall id 20000+1
-piece: ""
-score: 0.0
-type: CONTROL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[2]  corresponds to overall id 20000+2
-piece: ""
-score: 0.0
-type: CONTROL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[3]  corresponds to overall id 20000+3
-piece: ""
-score: 0.0
-type: CONTROL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[4]  corresponds to overall id 20000+4
-piece: ""
-score: 0.0
-type: USER_DEFINED
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[5]  corresponds to overall id 20000+5
-piece: "\342\226\201"
-score: -2.6171817779541016
-type: NORMAL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[6]
-piece: ","
-score: -3.151700019836426
-type: NORMAL
-
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[50]
-piece: "{"
-score: -7.532660961151123
-type: NORMAL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[100]
-piece: "\342\226\201the" # "\342\226\201" 这是啥??
-score: -3.922896385192871
-type: NORMAL
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[200]
-piece: "\342\226\201This"
-score: -7.821105480194092
-type: NORMAL
-
-
-tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[128293]
-piece: "\342\226\201pa\303\255ses"
-score: -14.182646751403809
-type: NORMAL
-```
-
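A small sketch of the id layout the deleted README describes: 20,000 image tokens occupy ids 0-19999 and the 130,000 text pieces follow, so text piece i maps to overall id 20000 + i:

```python
# Illustration of the icetk id layout described above: 20,000 image tokens come
# first, then 130,000 text tokens, for 150,000 ids in total.
NUM_IMAGE_TOKENS = 20000
NUM_TEXT_TOKENS = 130000
VOCAB_SIZE = NUM_IMAGE_TOKENS + NUM_TEXT_TOKENS  # 150000

def text_piece_to_overall_id(piece_index: int) -> int:
    """Text tokenizer piece i corresponds to overall id 20000 + i."""
    assert 0 <= piece_index < NUM_TEXT_TOKENS
    return NUM_IMAGE_TOKENS + piece_index

def is_image_token(token_id: int) -> bool:
    return 0 <= token_id < NUM_IMAGE_TOKENS

print(text_piece_to_overall_id(5))  # 20005, the "▁" piece listed above
print(is_image_token(123))          # True
```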
diff --git a/spaces/ewgewgewg/IndexingAlpha/app.py b/spaces/ewgewgewg/IndexingAlpha/app.py
deleted file mode 100644
index 21807dd968c7e8ac0cbaa32e31437fcdf15066ff..0000000000000000000000000000000000000000
--- a/spaces/ewgewgewg/IndexingAlpha/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# GNU
-import gradio as gr
-from generate import generate
-
-demo = gr.Blocks()
-
-def attempted_items_changer(attempted_items_input):
- if (not attempted_items_input.isdigit()):
- return {
- attempted_items: 50
- }
- return {
- attempted_items: max(int(attempted_items_input), 0)
- }
-
-def offset_changer(offset_input):
-    if(not offset_input.isdigit() and not (offset_input.startswith('-') and offset_input[1:].isdigit())):
- return {
- offset: 0
- }
- return {
- offset: int(offset_input)
- }
-
-def custom_changer (custom_input):
- return {
- custom: custom_input
- }
-
-with demo:
-
- attempted_items = gr.State(50)
- offset = gr.State(0)
- custom = gr.State("")
-
- gr.Markdown("# PDF to Index")
-
- with gr.Column():
-
- gr.Markdown("### Load Inputs")
-
- uploaded_file = gr.File(
- label="Upload a PDF file",
- file_count="single",
- type="file"
- )
-
- with gr.Row():
- attempted_items_input = gr.Textbox(value="50", show_label=True, label="Attempted Generated Items")
- offset_input = gr.Textbox(value="0", show_label=True, label="Page Offset")
- attempted_items_input.change(attempted_items_changer, [attempted_items_input], [attempted_items])
- offset_input.change(offset_changer, [offset_input], [offset])
-
- gr.HTML("
Attempted Generated Items is the number of terms intended to be automatically generated for index (output may be slightly lower), while Page Offset is a value added to each page number found in the file. In the case of invalid values, Attempted Items will default to 50 and Page Offset will default to 0. If the fields do not produce expected values, you may be clicking too quickly -- please adjust the field, wait, and try again.
You can add semicolon-separated values in Custom to add custom fields to index. Optionally, you can comma-separate terms between semicolons if you want multiple terms to contribute to a single index entry -- the first term will be the label for the index entry. If Custom does not produce expected values, you may be clicking too quickly -- please adjust the field, wait, and try again.
")
-
-
- gr.Markdown("---")
-
- with gr.Column():
- gr.Markdown("### Index From PDF")
- convert_button = gr.Button("Generate Index From PDF", variant="primary")
-        out_placeholder = gr.HTML('<p>Output will appear below, with PyPDF2 for preprocessing and yake for processing:</p>')
- gr.Markdown("### Index")
- index = gr.Textbox(
- label="Index", placeholder="The index will appear here"
- )
-
- convert_button.click(
- fn=generate,
- inputs=[uploaded_file, attempted_items, offset, custom],
- outputs=[index],
- )
-
-demo.launch()
\ No newline at end of file
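The help text in the deleted app.py describes the Custom field format: semicolon-separated entries, with optional comma-separated alternative terms where the first term labels the entry. The hypothetical parser below only illustrates that described convention; the real handling lives in generate.py, which is not part of this diff:

```python
# Hypothetical parser for the Custom field format described above:
# "label1,alt1,alt2;label2" -> [("label1", ["label1", "alt1", "alt2"]), ("label2", ["label2"])]
def parse_custom(custom: str):
    entries = []
    for group in custom.split(";"):
        terms = [t.strip() for t in group.split(",") if t.strip()]
        if terms:
            entries.append((terms[0], terms))  # first term labels the index entry
    return entries

print(parse_custom("machine learning,ML;index"))
```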
diff --git a/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py b/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py
deleted file mode 100644
index 100783d248c4cd6dcbdb091181ac21f0f66af670..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py
+++ /dev/null
@@ -1,161 +0,0 @@
-
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf
-from multiprocessing import Process, Pipe
-
-load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
-
-#################################################################################
-class GetGLMHandle(Process):
- def __init__(self):
- super().__init__(daemon=True)
- self.parent, self.child = Pipe()
- self.chatglm_model = None
- self.chatglm_tokenizer = None
- self.info = ""
- self.success = True
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def check_dependency(self):
- try:
- import sentencepiece
- self.info = "依赖检测通过"
- self.success = True
-        except ImportError:
- self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。"
- self.success = False
-
- def ready(self):
- return self.chatglm_model is not None
-
- def run(self):
-        # Runs in the child process.
-        # On the first run, load the model parameters.
- retry = 0
- while True:
- try:
- if self.chatglm_model is None:
- self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
- device, = get_conf('LOCAL_MODEL_DEVICE')
- if device=='cpu':
- self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()
- else:
- self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
- self.chatglm_model = self.chatglm_model.eval()
- break
- else:
- break
-            except Exception:
- retry += 1
- if retry > 3:
- self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
- raise RuntimeError("不能正常加载ChatGLM的参数!")
-
- while True:
-            # Wait for the next task.
- kwargs = self.child.recv()
-            # A message arrived; start handling the request.
- try:
- for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
- self.child.send(response)
-                # # Receive a possible terminate command mid-stream (if any)
- # if self.child.poll():
- # command = self.child.recv()
- # if command == '[Terminate]': break
-            except Exception:
- from toolbox import trimmed_format_exc
- self.child.send('[Local Message] Call ChatGLM fail.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
-            # Request handled; start the next loop.
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
-        # Runs in the main (parent) process.
- self.threadLock.acquire()
- self.parent.send(kwargs)
- while True:
- res = self.parent.recv()
- if res != '[Finish]':
- yield res
- else:
- break
- self.threadLock.release()
-
-global glm_handle
-glm_handle = None
-#################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
- """
-    Multi-threaded method.
-    See request_llm/bridge_all.py for the function's documentation.
- """
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info
- if not glm_handle.success:
- error = glm_handle.info
- glm_handle = None
- raise RuntimeError(error)
-
-    # chatglm has no sys_prompt interface, so the prompt is prepended to the history
- history_feedin = []
- history_feedin.append(["What can I do?", sys_prompt])
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
-    watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- response = ""
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- if len(observe_window) >= 1: observe_window[0] = response
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("程序终止。")
- return response
-
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
-    Single-threaded method.
-    See request_llm/bridge_all.py for the function's documentation.
- """
- chatbot.append((inputs, ""))
-
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not glm_handle.success:
- glm_handle = None
- return
-
- if additional_fn is not None:
- import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompts
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
-    # Build the history fed to the model.
- history_feedin = []
- history_feedin.append(["What can I do?", system_prompt] )
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
-    # Start receiving ChatGLM's streamed reply.
- response = "[Local Message]: 等待ChatGLM响应中 ..."
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, response)
- yield from update_ui(chatbot=chatbot, history=history)
-
-    # Final output.
- if response == "[Local Message]: 等待ChatGLM响应中 ...":
- response = "[Local Message]: ChatGLM响应异常 ..."
- history.extend([inputs, response])
- yield from update_ui(chatbot=chatbot, history=history)
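A minimal sketch of calling the multi-threaded entry point in the deleted bridge_chatglm.py; the llm_kwargs keys mirror the ones the function reads, while the values and the availability of a local ChatGLM checkpoint are assumptions:

```python
# Hypothetical call into predict_no_ui_long_connection from bridge_chatglm.py above.
# Requires the ChatGLM dependencies (see check_dependency) and enough RAM/VRAM.
llm_kwargs = {'max_length': 2048, 'top_p': 0.7, 'temperature': 0.95}  # illustrative values

reply = predict_no_ui_long_connection(
    inputs="Summarize what a watchdog timer is in one sentence.",
    llm_kwargs=llm_kwargs,
    history=[],
    sys_prompt="You are a helpful assistant.",
    observe_window=[])
print(reply)
```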
diff --git a/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py b/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py
deleted file mode 100644
index dda852b159bd8f2864fe6f6b87de9677e3e41625..0000000000000000000000000000000000000000
--- a/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import imgui
-from gui_utils import imgui_utils
-
-#----------------------------------------------------------------------------
-
-class TruncationNoiseWidget:
- def __init__(self, viz):
- self.viz = viz
- self.prev_num_ws = 0
- self.trunc_psi = 1
- self.trunc_cutoff = 0
- self.noise_enable = True
- self.noise_seed = 0
- self.noise_anim = False
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
- num_ws = viz.result.get('num_ws', 0)
- has_noise = viz.result.get('has_noise', False)
- if num_ws > 0 and num_ws != self.prev_num_ws:
- if self.trunc_cutoff > num_ws or self.trunc_cutoff == self.prev_num_ws:
- self.trunc_cutoff = num_ws
- self.prev_num_ws = num_ws
-
- if show:
- imgui.text('Truncate')
- imgui.same_line(viz.label_w)
- with imgui_utils.item_width(viz.font_size * 10), imgui_utils.grayed_out(num_ws == 0):
- _changed, self.trunc_psi = imgui.slider_float('##psi', self.trunc_psi, -1, 2, format='Psi %.2f')
- imgui.same_line()
- if num_ws == 0:
- imgui_utils.button('Cutoff 0', width=(viz.font_size * 8 + viz.spacing), enabled=False)
- else:
- with imgui_utils.item_width(viz.font_size * 8 + viz.spacing):
- changed, new_cutoff = imgui.slider_int('##cutoff', self.trunc_cutoff, 0, num_ws, format='Cutoff %d')
- if changed:
- self.trunc_cutoff = min(max(new_cutoff, 0), num_ws)
-
- with imgui_utils.grayed_out(not has_noise):
- imgui.same_line()
- _clicked, self.noise_enable = imgui.checkbox('Noise##enable', self.noise_enable)
- imgui.same_line(round(viz.font_size * 27.7))
- with imgui_utils.grayed_out(not self.noise_enable):
- with imgui_utils.item_width(-1 - viz.button_w - viz.spacing - viz.font_size * 4):
- _changed, self.noise_seed = imgui.input_int('##seed', self.noise_seed)
- imgui.same_line(spacing=0)
- _clicked, self.noise_anim = imgui.checkbox('Anim##noise', self.noise_anim)
-
- is_def_trunc = (self.trunc_psi == 1 and self.trunc_cutoff == num_ws)
- is_def_noise = (self.noise_enable and self.noise_seed == 0 and not self.noise_anim)
- with imgui_utils.grayed_out(is_def_trunc and not has_noise):
- imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w)
- if imgui_utils.button('Reset', width=-1, enabled=(not is_def_trunc or not is_def_noise)):
- self.prev_num_ws = num_ws
- self.trunc_psi = 1
- self.trunc_cutoff = num_ws
- self.noise_enable = True
- self.noise_seed = 0
- self.noise_anim = False
-
- if self.noise_anim:
- self.noise_seed += 1
- viz.args.update(trunc_psi=self.trunc_psi, trunc_cutoff=self.trunc_cutoff, random_seed=self.noise_seed)
- viz.args.noise_mode = ('none' if not self.noise_enable else 'const' if self.noise_seed == 0 else 'random')
-
-#----------------------------------------------------------------------------
diff --git a/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md b/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md
deleted file mode 100644
index 27f3fc77df9eff6969053db0b6b18ec03438cfb8..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
Al Azkar Imam Nawawi PDF Download: A Free and Authentic Source of Islamic Supplications and Remembrances
-
-
If you are looking for a reliable and authentic source of Islamic supplications and remembrances, you might want to check out al azkar imam nawawi pdf download. This is a PDF file that contains the book Kitab al-Adhkar by Imam Yahya ibn Sharaf an-Nawawi, a famous scholar and jurist of the Shafi'i school of Islamic law.
Al azkar imam nawawi pdf download is a comprehensive collection of supplications and remembrances that are related from the Prophet Muhammad (peace be upon him) and his companions. The book covers various topics, such as the daily prayers, the morning and evening remembrances, the virtues of reciting the Quran, the supplications for different occasions and situations, the remembrances for the night and day, the etiquettes of sleeping and waking up, the supplications for protection and healing, and the remembrances for death and the hereafter.
-
-
The book is written in a clear and concise manner, with references to the sources of each supplication and remembrance. The book also includes explanatory notes and comments by Imam an-Nawawi on some of the supplications and remembrances. The book is divided into 17 chapters, each of which covers a different category of supplications and remembrances.
-
-
Why You Should Download Al Azkar Imam Nawawi PDF
-
-
There are many reasons why you should download al azkar imam nawawi pdf if you are interested in Islamic supplications and remembrances. Here are some of them:
-
-
-
Al azkar imam nawawi pdf is a free download that you can access anytime and anywhere. You don't need to pay anything or sign up for anything to get this valuable resource.
-
Al azkar imam nawawi pdf is a PDF file that you can easily read on your computer, tablet, or smartphone. You can also print it out if you prefer a hard copy.
-
Al azkar imam nawawi pdf is a comprehensive and authentic collection of supplications and remembrances that are based on the Quran and Sunnah. It is very useful for all Muslims who want to increase their connection with Allah and seek His blessings and mercy.
-
Al azkar imam nawawi pdf is written by an eminent scholar who has a deep knowledge and understanding of Islamic sciences. He has authored several books on various topics of Islamic law, theology, hadith, ethics, and spirituality.
-
Al azkar imam nawawi pdf is a user-friendly book that has a clear and concise style, with references to the sources of each supplication and remembrance. It also has explanatory notes and comments by Imam an-Nawawi on some of the supplications and remembrances.
-
-
-
How to Download Al Azkar Imam Nawawi PDF
-
-
If you want to download al azkar imam nawawi pdf, you can follow these simple steps:
-
-
-
Click on one of the links below to go to the download page.
-
Click on the download button to start downloading the PDF file.
-
Save the file on your device or open it with your preferred PDF reader.
-
Enjoy reading Kitab al-Adhkar by Imam Yahya ibn Sharaf an-Nawawi.
-
Conclusion
-
-
Al azkar imam nawawi pdf is a great resource for anyone who wants to learn Islamic supplications and remembrances. It is a comprehensive and authentic collection of supplications and remembrances that are based on the Quran and Sunnah. It is also a free download that you can access anytime and anywhere. So what are you waiting for? Download al azkar imam nawawi pdf today and enhance your spiritual life.
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md b/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md
deleted file mode 100644
index effa5c22f608c9e387611bf13a5b7baa592bfd5f..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download the Full Version of Halo 3 for PC in a Single Link, in Spanish
-
-
If you are a fan of action and science fiction games, you surely know the Halo saga, one of the most successful and popular in video game history. Halo is a franchise created by Bungie Studios and published by Microsoft that tells the story of the Master Chief, a genetically enhanced soldier who must fight an alien invasion led by the Covenant, an alliance of fanatical and hostile extraterrestrial races.
-
Download the Full Version of Halo 3 for PC in a Single Link, in Spanish
Among the games in the Halo saga, one of the most notable is Halo 3, the third chapter of the original trilogy, released in 2007 for the Xbox 360. Halo 3 offers an incredible gaming experience, with spectacular graphics, an epic soundtrack, fluid and varied gameplay, and a gripping, exciting story.
-
-
Halo 3 is set in the year 2552, in the middle of the interstellar war between humanity and the Covenant. The Master Chief returns to Earth to defend it from the alien invasion, together with Sergeant Johnson and the Arbiter, a former Covenant leader who has allied himself with the humans. The Master Chief's goal is to stop the Prophet of Truth, the supreme leader of the Covenant, who plans to activate a Halo ring, an ancient weapon of mass destruction built by a vanished civilization called the Forerunners.
-
-
Halo 3 offers a single-player or co-op campaign with different scenarios and situations, from urban combat to space battles. You can also drive various vehicles and use a wide arsenal of human and alien weapons. In addition, Halo 3 has an online multiplayer mode where you can compete or cooperate with other players in different modes and maps.
-
-
How to Download the Full Version of Halo 3 for PC in a Single Link, in Spanish
-
-
Although Halo 3 was originally released for the Xbox 360, many fans wanted to be able to play it on their PC as well. That is why a PC version of Halo 3 was released in 2020 as part of Halo: The Master Chief Collection, which includes the six main games of the saga. However, that version requires buying the whole collection and having a Steam account in order to play it.
-
-
-
If you want to download the full version of Halo 3 for PC in a single link and in Spanish, without having to pay anything or register on any platform, you are in luck, because in this article we are going to show you how to do it for free and easily. Just follow the steps below:
-
-
-
The first thing you have to do is open this link: https://repacksvillage.wordpress.com/2020/08/26/h4lo3/. It is a web page that lets you download the full version of Halo 3 for PC in a single link and in Spanish, with no need to register or pay anything.
-
Once you open the link, you will see a button that says "Torrent". Click that button to download a .torrent file containing the game. To open that file you need to have the uTorrent program installed, which you can download for free from here: https://www.utorrent.com/intl/es/desktop/.
-
When you open the .torrent file with uTorrent, the game download will start. The game is 4.96 GB in size, so it may take a while depending on your internet speed. Once the download is complete, you will have the game on your PC.
-
To install the game, double-click the "setup.exe" file inside the game folder. A window with the game installer will open; follow the instructions it gives you. Make sure to choose Spanish when it asks for the language.
-
When the installation finishes, you will be able to play the full version of Halo 3 for PC in Spanish. Just click the game icon that will have been created on your desktop or in the start menu. Enjoy this fantastic game and live an epic adventure alongside the Master Chief.
-
-
-
Conclusion
-
-
Downloading the full version of Halo 3 for PC in a single link and in Spanish is possible thanks to the web page we have shown you. This way you can enjoy one of the best games in the Halo saga on your computer without paying anything or registering on any platform. Just follow the steps we have described and you will be able to enjoy an incredible gaming experience.
-
-
Halo 3 is a game you cannot miss if you like action and science fiction games. It offers a single-player or co-op campaign full of excitement and variety, and an online multiplayer mode where you can compete or cooperate with other players in different modes and maps. It also has spectacular graphics, an epic soundtrack, and a gripping story.
-
-
Don't wait any longer and download the full version of Halo 3 for PC in a single link and in Spanish. We assure you that you will not regret it.
-
What Requirements Your PC Needs to Play Halo 3
-
-
To play Halo 3 on your PC, you must make sure that your computer meets the game's minimum requirements. These requirements are the following:
-
-
-
Operating system: Windows 7 SP1 64-bit
-
Processor: Intel Core i7-975 or AMD A12-9800 APU
-
RAM: 2 GB
-
Graphics card: GeForce GTS 450 or Radeon R7 Graphics
-
Disk space: 4 GB
-
DirectX: DirectX 9
-
-
-
If your PC meets these requirements, you will be able to play Halo 3 without problems. However, if you want to enjoy better graphics and sound quality, we recommend that your PC meets the game's recommended requirements. These requirements are the following:
-
-
-
Operating system: Windows 10 64-bit
-
Processor: Intel Core i5-4670K or AMD Ryzen 5 1600X
-
RAM: 8 GB
-
Graphics card: GeForce GTX 1060 or Radeon RX 480
-
Disk space: 4 GB
-
DirectX: DirectX 12
-
-
-
If your PC meets these requirements, you will be able to play Halo 3 at resolutions of up to 4K with surround sound. You will also be able to adjust the graphics and sound options to your liking and to your computer's capabilities.
-
-
How to Download and Install Halo 3 on Your PC, Step by Step
-
-
Now that you know what requirements your PC needs to play Halo 3, we are going to explain how to download and install Halo 3 on your PC step by step. To do so, just follow the steps below:
-
-
-
Open this link: https://repacksvillage.wordpress.com/2020/08/26/h4lo3/. It is a web page that lets you download the full version of Halo 3 for PC in a single link and in Spanish, with no need to register or pay anything.
-
Click the button that says "Torrent" to download a .torrent file containing the game. To open that file you need to have the uTorrent program installed, which you can download for free from here: https://www.utorrent.com/intl/es/desktop/.
-
Open the .torrent file with uTorrent and the game download will start. The game is 4.96 GB in size, so it may take a while depending on your internet speed. Once the download is complete, you will have the game on your PC.
-
Double-click the "setup.exe" file inside the game folder and a window with the game installer will open. Follow the instructions it gives you and make sure to choose Spanish when it asks for the language.
-
When the installation finishes, you will be able to play the full version of Halo 3 for PC in Spanish. Just click the game icon that will have been created on your desktop or in the start menu.
-
-
-
That is how easy it is to download and install Halo 3 on your PC. Now all that is left is to enjoy this incredible game and live an epic adventure alongside the Master Chief.
-
What Is the Difference Between Halo 3 and Halo 3 ODST
-
-
Halo 3 and Halo 3 ODST are two games in the Halo saga that share the same universe and the same graphics engine, but they have some important differences. These are some of the differences between Halo 3 and Halo 3 ODST:
-
-
-
The story. Halo 3 continues the story of the Master Chief and the Arbiter, who fight against the Covenant and the Flood to save humanity and the universe. Halo 3 ODST tells the story of a group of elite soldiers called ODSTs (Orbital Drop Shock Troopers) who infiltrate the city of New Mombasa to investigate what happened after the Covenant's attack.
-
The protagonist. In Halo 3 you control the Master Chief, a genetically enhanced supersoldier with special armor and an artificial intelligence called Cortana. In Halo 3 ODST you control the Rookie, an ODST soldier who has no advantage over the enemies and who must use his visor to analyze the surroundings.
-
The gameplay. Halo 3 has a more linear, action-focused style of play, with large and varied environments, vehicles, weapons, and enemies. Halo 3 ODST has a more open, exploration-focused style of play, with darker, more urban environments and fewer vehicles, weapons, and enemies.
-
The multiplayer. Halo 3 has a very complete and popular online multiplayer mode, with different modes and maps, a map editor called Forge, and a theater mode for recording and editing matches. Halo 3 ODST has a more limited and less popular online multiplayer with only two modes: Firefight, a cooperative mode against waves of enemies, and Golden Skull, a competitive mode against other players.
-
-
-
Halo 3 and Halo 3 ODST are two games that each have their own characteristics and appeal, but they also share many common elements. If you like action and science fiction games, we recommend trying both.
-
-
How to Download and Install Halo 3 ODST for PC in a Single Link, in Spanish
-
-
If you want to download and install Halo 3 ODST for PC in a single link and in Spanish, you have several options available. One of them is to buy Halo: The Master Chief Collection on Steam, which includes the six main games of the saga at a reasonable price. Another option is to download Halo 3 ODST for PC in a single link and in Spanish from a web page that offers it for free and without complications.
-
-
In this article we have shown you a web page that lets you download Halo 3 ODST for PC in a single link and in Spanish for free and easily. Just follow the steps described above and you will be able to enjoy this incredible game on your computer.
-
-
However, keep in mind that by downloading Halo 3 ODST for PC in a single link and in Spanish from an unofficial web page, you may be infringing the game's copyright and exposing yourself to possible viruses or malware. For that reason, we recommend downloading the game from a safe and reliable source.
-
Conclusion
-
-
In this article we have shown you how to download the full version of Halo 3 for PC in a single link and in Spanish, as well as Halo 3 ODST for PC in a single link and in Spanish. We have explained what Halo 3 offers as a PC game, what requirements your PC needs to play Halo 3, how to download and install Halo 3 on your PC step by step, what the advantages of downloading it in a single link and in Spanish are, how to solve possible problems when downloading it, and what users think of Halo 3 for PC. We have also explained the difference between Halo 3 and Halo 3 ODST, and how to download and install Halo 3 ODST on your PC step by step.
-
-
We hope this article has been useful to you and that you have been able to enjoy one of the best games in the Halo saga on your computer. If you liked this article, share it with your friends or with other Halo fans. You can also leave us a comment with your opinion about the game or about how to download and install it.
-
-
Thank you for reading this article, and see you next time.
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md b/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md
deleted file mode 100644
index 650ee830da9e3918a7d95e9d39b8a01ef336d24b..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-gunbound 3 download gunbound 3d model error 310 gunbound gunbound season 3 hacks gunbound season 3 aimbot gunboundm eraser 4d29de3e1b
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md
deleted file mode 100644
index ef30c2124cd8d586d6f8c0ba1cfe54688f42ff8c..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-Hindi Medium is released in. The latest Hindi Medium released in Desi Cinemas. VFX: Marakkaji Music: Kami and Nava. The minimum available update option is Release 2, so you can select any of the below update option: 2. Now you can see all the action of the movie in the HD quality. Free Download Hindi Medium on Dailymotion. Watch Hindi Medium movie online on Dailymotion. Produced by Raj Batra. It was Released on June 29, 2018 in India. Watch Hindi Medium movie online on Dailymotion.. Hindi Medium - Desi Cinemas (TV Series Online) - Wikipedia. Hindi Medium - Wikipedia. Hindi Medium movie trailer - 2.0.0 Hindi Medium TV-series released on, on Desi Cinemas.Q:
-
-How do I clear the Search Bar and View?
-
-I have this code:
-
-- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
-
-
-
- [self.searchDisplayController endSearch];
-
- [self.searchDisplayController setActive:NO animated:NO];
-
- [self.searchDisplayController setSearchBar:nil];
-
-
-
-It clears the search bar, but the View is still there. How do I clear that too?
-
-A:
-
-Make sure your searchDisplayController is connected to the searchBar. You can do this by unchecking the searchDisplayController in your nib file.
-
-In the past, it has been known to develop a series of interconnects between a radiation source and a converter for generating an electrical signal representative of the energy of the beam of radiation emitted by the source. In many applications, such interconnects are used as an output for a radiation detector. The output is responsive to the radiation intensity, the electrical signal being converted to a rate of change of beam intensity representative of the energy of the beam. Such a device is shown in U.S. Pat. No. 3,886,333, issued to Burghart et al. on May 27, 1975. The present invention is an improvement on the device shown in that patent, and provides a simple and efficient system for conversion of a radiation beam having non-linear characteristics to an electrical signal, the system being particularly adaptable for use with a pulsed radiation source and a storage device for such a source.
-
-The Burghart et al. system has been employed in 4fefd39f24
-
-
-
diff --git a/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py b/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py
deleted file mode 100644
index 5b820368e2bb63b82c4ebbdadca162fe0495098c..0000000000000000000000000000000000000000
--- a/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import PIL.Image as Image
-import torch
-from fastai.vision.all import Learner
-from numpy.typing import NDArray
-
-
-class Processor():
-    """Pairs a fastai image classifier with a YOLOv5 person detector."""
-
-    def __init__(self, learn: Learner):
-        self.inference = learn
-        # Pre-trained YOLOv5 model, used only to count people in a photo.
-        self.__model = torch.hub.load(
-            'ultralytics/yolov5', 'yolov5x6', trust_repo=True)
-
-    def classify_image(self, images: NDArray) -> str:
-        # Delegate classification to the fastai learner.
-        return self.inference.predict(images)
-
-    def filter_image(self, image: Image) -> bool:
-        # Run the detector and read its detections as a pandas DataFrame.
-        results = self.__model(image)
-        results = results.pandas().xyxy[0]
-
-        # Keep only photos that contain exactly one person.
-        person_detected = sum(1 for name in results['name'] if name == 'person')
-        return person_detected == 1
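For orientation, here is a minimal usage sketch for this class. It is not part of the Space: the exported learner file (`export.pkl`) and the image path are assumptions, and downloading the YOLOv5x6 weights on first use requires an internet connection.

```python
# Hypothetical usage sketch (file names are assumptions, not from the repo).
import numpy as np
from PIL import Image
from fastai.vision.all import load_learner

from Processor import Processor

learn = load_learner('export.pkl')       # assumed exported fastai classifier
processor = Processor(learn)

img = Image.open('post.jpg')             # assumed input photo
if processor.filter_image(img):          # True only when exactly one person is detected
    print(processor.classify_image(np.array(img)))
else:
    print('Skipped: expected exactly one person in the photo.')
```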
diff --git a/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md b/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md
deleted file mode 100644
index 6933feeb38e16451a7739bad7ed2ad5b79d9b3ed..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
Badminton League APK Unlimited Money: How to Get It and What to Do With It
-
Badminton is a popular sport that can be played by anyone, anywhere. But if you want to take your game to the next level, you might want to try badminton league apk, a mobile game that lets you compete with other players online. In this game, you can customize your character, choose your racket, and play in various modes and tournaments. You can also earn coins and gems, which are the in-game currencies that you can use to buy items and upgrade your skills.
However, earning coins and gems can be slow and tedious, especially if you want to unlock all the features and items in the game. That's why some players look for ways to get unlimited money in badminton league apk. Unlimited money means having unlimited coins and gems, which can give you an edge over your opponents and make the game more fun and exciting. But how can you get unlimited money in badminton league apk? There are two main methods: using modded versions or cheats.
-
Modded Versions
-
A modded version is a copy of the original game that someone has altered to change certain aspects of it. For example, a modded version of badminton league apk might come with unlimited coins and gems already available for you to use, so you don't have to earn them by playing the game.
-
There are many websites that offer modded versions of badminton league apk for free download. However, you should be careful when downloading modded versions from unknown sources, as they might contain viruses or malware that can harm your device or steal your personal information. You should also check the reviews and ratings of the modded versions before downloading them, as some of them might not work properly or have bugs.
-
Cheats
-
Cheats are codes or commands that you can enter in the game to activate certain effects or functions. For example, a cheat for badminton league apk might give you unlimited coins and gems without having to download a modded version. You can find cheats for badminton league apk online, either on websites or videos that show you how to use them.
-
badminton league mod apk download free money
-badminton league hack apk unlimited coins and gems
-badminton league cheats apk no root money
-badminton league game apk mod money unlocked
-badminton league premium apk unlimited cash
-badminton league pro apk hack money and energy
-badminton league cracked apk free money and gold
-badminton league latest apk mod money and items
-badminton league online apk unlimited money and diamonds
-badminton league 3d apk hack money and tickets
-badminton league android apk mod money and characters
-badminton league offline apk unlimited money and skills
-badminton league full apk free money and costumes
-badminton league update apk mod money and weapons
-badminton league 2023 apk unlimited money and levels
-badminton league best apk hack money and power-ups
-badminton league new apk free money and equipment
-badminton league 2 apk mod money and features
-badminton league fun apk unlimited money and modes
-badminton league super apk hack money and rewards
-badminton league mega apk free money and bonuses
-badminton league vip apk mod money and extras
-badminton league easy apk unlimited money and upgrades
-badminton league cool apk hack money and accessories
-badminton league awesome apk free money and prizes
-badminton league realistic apk mod money and physics
-badminton league ultimate apk unlimited money and challenges
-badminton league amazing apk hack money and graphics
-badminton league fantastic apk free money and sounds
-badminton league incredible apk mod money and animations
-badminton league extreme apk unlimited money and difficulty
-badminton league wonderful apk hack money and effects
-badminton league superb apk free money and ratings
-badminton league excellent apk mod money and reviews
-badminton league marvelous apk unlimited money and achievements
-badminton league splendid apk hack money and leaderboards
-badminton league brilliant apk free money and statistics
-badminton league outstanding apk mod money and customization
-badminton league magnificent apk unlimited money and options
-badminton league phenomenal apk hack money and controls
-
However, you should be aware that using cheats might ruin the fun and challenge of the game, as well as make it unfair for other players who play by the rules. You should also know that using cheats might get you banned from the game or cause your account to be deleted. Therefore, you should use cheats at your own risk and discretion.
-
Conclusion
-
Badminton league apk is a fun and addictive game that lets you play badminton with other players online. You can earn coins and gems in the game to buy items and upgrade your skills. However, if you want to get unlimited money in badminton league apk, you can either use modded versions or cheats. Modded versions are modified versions of the original game that have unlimited coins and gems already available for you to use. Cheats are codes or commands that you can enter in the game to get unlimited coins and gems without downloading anything.
-
However, both methods have their pros and cons. Modded versions might contain viruses or malware that can harm your device or steal your personal information. Cheats might ruin the fun and challenge of the game and get you banned from the game or cause your account to be deleted. Therefore, you should be careful when using these methods and only use them if you really want to.
-
Here are some tips and warnings for using these methods:
-
-
Always backup your data before using modded versions or cheats.
-
Only download modded versions from trusted sources.
-
Check the reviews and ratings of modded versions before downloading them.
-
Don't use cheats to harass or bully other players, as they might report you to the game's moderators.
-
Don't spend all your unlimited money on unnecessary items, as they might clutter your inventory or make the game boring.
-
Enjoy the game and have fun, but don't forget to play fair and respect other players.
-
-
FAQs
-
Here are some frequently asked questions about badminton league apk unlimited money:
-
Q: Is badminton league apk free to download and play?
-
A: Yes, badminton league apk is free to download and play. However, it shows ads and offers in-app purchases; you can pay real money to remove the ads or to buy extra items.
-
Q: Is badminton league apk safe to download and play?
-
A: Yes, badminton league apk is safe to download and play, as long as you download it from the official Google Play Store or App Store. However, if you download modded versions or cheats from unknown sources, you might expose your device or personal information to risks.
-
Q: Is badminton league apk compatible with my device?
-
A: Badminton league apk is compatible with most Android and iOS devices that have at least 4.1 and 8.0 versions respectively. However, some devices might experience lag or crashes due to low performance or memory.
-
Q: How can I contact the developers of badminton league apk?
-
A: You can contact the developers of badminton league apk by sending an email to redfishgamestudio@gmail.com or by visiting their Facebook page at https://www.facebook.com/Badminton-League-203729826806154/.
-
Q: How can I improve my skills in badminton league apk?
-
A: You can improve your skills in badminton league apk by practicing regularly, learning from other players, watching tutorials and tips online, and upgrading your character and racket.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md b/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md
deleted file mode 100644
index 871a6062190e2966e5fa09eaff460e9b7452e288..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
Merge Animals My Perfect Zoo: A Fun and Creative Merge Game
-
Do you love animals and want to create your own zoo? Do you enjoy merging games and want to discover new and exotic creatures? If you answered yes to any of these questions, then you should try Merge Animals My Perfect Zoo, a free casual game for Android devices that lets you merge different animals and build your dream zoo.
-
What is Merge Animals My Perfect Zoo?
-
Merge Animals My Perfect Zoo is a game developed by Tara Westover, a casual game developer who has created other popular merge games such as Merge Plants and Merge Cars. In this game, you can merge dozens of different animals, from saber toothed tigers and mammoths to dinosaurs and unicorns, and watch them evolve into new and amazing species. You can also let your hunters catch animals and tame them for your use, and decorate your zoo with various items and attractions.
The concept and gameplay of Merge Animals My Perfect Zoo
-
The concept of Merge Animals My Perfect Zoo is simple: you start with two identical hunters, who can catch animals for you. You can drag and drop them on the same spot to merge them into a higher level hunter, who can catch more advanced animals. You can also drag and drop two identical animals on the same spot to merge them into a new animal, who will have different traits and abilities. You can collect coins from your animals, which you can use to buy more hunters or animals, or upgrade your zoo. You can also complete various challenges and quests to earn rewards and unlock new features.
-
The features and benefits of Merge Animals My Perfect Zoo
-
Merge Animals My Perfect Zoo has many features and benefits that make it a fun and creative merge game. Some of them are:
-
-
It has a diverse range of animals, from prehistoric to mythical, that you can merge and discover.
-
It has a simple and intuitive operation, with easy drag-and-drop controls.
-
It has a colorful and cute graphics style, with lively animations and sound effects.
-
It has a relaxing and enjoyable gameplay, with no time limit or pressure.
-
It has a rewarding and addictive progression system, with many levels, achievements, and upgrades.
-
It has a social aspect, where you can share your zoo with your friends or visit other players' zoos.
-
-
How to download and install Merge Animals My Perfect Zoo APK?
-
If you want to play Merge Animals My Perfect Zoo on your Android device, you can download and install the APK file from various sources online. However, you should be careful about the source that you choose, as some APK files may contain viruses or malware that can harm your device. Here are the steps to download and install Merge Animals My Perfect Zoo APK safely:
-
The steps to download and install Merge Animals My Perfect Zoo APK
Search for "Merge Animals My Perfect Zoo" in the search bar of the website.
-
Select the latest version of the game from the results, and click on the download button.
-
Wait for the APK file to be downloaded on your device.
-
Go to your device's settings, and enable the option to install apps from unknown sources.
Locate the APK file on your device, and tap on it to start the installation process.
-
Follow the instructions on the screen, and wait for the installation to be completed.
-
Launch the game from your app drawer, and enjoy merging animals and building your perfect zoo.
-
-
The tips and tricks to play Merge Animals My Perfect Zoo APK
-
If you want to play Merge Animals My Perfect Zoo APK more effectively and efficiently, here are some tips and tricks that you can use:
-
-
Use the free gifts and rewards that you get from watching ads, completing tasks, or logging in daily. They can help you get more coins, hunters, or animals.
-
Upgrade your hunters and animals regularly, as they will increase their productivity and value.
-
Merge your animals as much as possible, as they will unlock new and rare species that can generate more coins.
-
Expand your zoo as you progress, as it will give you more space to place your animals and decorations.
-
Visit other players' zoos and send them gifts, as they may return the favor and help you grow your zoo.
-
-
Why should you play Merge Animals My Perfect Zoo APK?
-
Merge Animals My Perfect Zoo APK is a game that can offer you many benefits, such as:
-
merge animals my perfect zoo mod apk
-merge animals my perfect zoo hack
-merge animals my perfect zoo cheats
-merge animals my perfect zoo download
-merge animals my perfect zoo game
-merge animals my perfect zoo online
-merge animals my perfect zoo free
-merge animals my perfect zoo review
-merge animals my perfect zoo tips
-merge animals my perfect zoo guide
-merge animals my perfect zoo gameplay
-merge animals my perfect zoo update
-merge animals my perfect zoo latest version
-merge animals my perfect zoo android
-merge animals my perfect zoo ios
-merge animals my perfect zoo pc
-merge animals my perfect zoo windows
-merge animals my perfect zoo mac
-merge animals my perfect zoo laptop
-merge animals my perfect zoo desktop
-merge animals my perfect zoo appbrain
-merge animals my perfect zoo google play
-merge animals my perfect zoo app store
-merge animals my perfect zoo tara westover
-merge animals my perfect zoo developer
-merge animals my perfect zoo casual game
-merge animals my perfect zoo simulation game
-merge animals my perfect zoo fun game
-merge animals my perfect zoo addictive game
-merge animals my perfect zoo best game
-merge animals my perfect zoo new game
-merge animals my perfect zoo popular game
-merge animals my perfect zoo top game
-merge animals my perfect zoo rated game
-merge animals my perfect zoo 500k downloads
-merge animals my perfect zoo 2023 game
-merge animals my perfect zoo january 2023 release date
-how to play merge animals my perfect zoo apk
-how to install merge animals my perfect zoo apk
-how to download merge animals my perfect zoo apk
-how to hack merge animals my perfect zoo apk
-how to cheat in merge animals my perfect zoo apk
-how to get free coins in merge animals my perfect zoo apk
-how to unlock all hunters in merge animals my perfect zoo apk
-how to catch all dinosaurs in merge animals my perfect zoo apk
-how to tame all mammoths in merge animals my perfect zoo apk
-how to build the best park in merge animals my perfect zoo apk
-how to earn more money in merge animals my perfect zoo apk
-
The reasons to play Merge Animals My Perfect Zoo APK
-
-
It can stimulate your creativity and imagination, as you can create your own unique zoo with different animals and decorations.
-
It can improve your concentration and logic skills, as you need to plan and strategize how to merge your animals and hunters effectively.
-
It can reduce your stress and boredom, as it is a relaxing and enjoyable game that you can play anytime and anywhere.
-
It can entertain and educate you, as you can learn about various animals and their characteristics.
-
It can satisfy your curiosity and sense of achievement, as you can discover new and amazing animals that you have never seen before.
-
-
The reviews and ratings of Merge Animals My Perfect Zoo APK
-
Merge Animals My Perfect Zoo APK has received positive reviews and ratings from many players who have tried it. Here are some of the comments that they have left on the Google Play Store:
-
-
-
| Name | Rating | Comment |
| --- | --- | --- |
| Amanda Smith | 5 stars | "This game is so fun and addictive. I love merging different animals and seeing what they turn into. The graphics are cute and colorful, and the game is easy to play. I recommend this game to anyone who likes merge games." |
| Brian Jones | 4 stars | "I enjoy playing this game a lot. It is relaxing and entertaining. The only thing that I don't like is that there are too many ads. Sometimes they interrupt the gameplay or make the game lag. I hope the developer can fix this issue." |
| Chloe Lee | 5 stars | "This game is awesome. I love how I can merge animals and create my own zoo. The game is very creative and fun. The animals are adorable and the zoo is beautiful. I like how I can visit other players' zoos and send them gifts." |
| David Wilson | 3 stars | "This game is okay. It is not very challenging or exciting. It is just a simple merge game with animals. The game is repetitive and boring after a while. I wish there were more features and options to customize the zoo." |
| Emma Brown | 4 stars | "This game is cute and relaxing. I like merging animals and seeing what they look like. The game is easy to play and suitable for all ages. The only problem is that the game crashes sometimes and I lose my progress. I hope the developer can fix this bug." |
-
Conclusion
-
Merge Animals My Perfect Zoo APK is a fun and creative merge game that lets you merge different animals and build your dream zoo. You can discover dozens of different animals, from prehistoric to mythical, and watch them evolve into new and amazing species. You can also decorate your zoo with various items and attractions, and share it with your friends or visit other players' zoos. The game has a simple and intuitive operation, a colorful and cute graphics style, a relaxing and enjoyable gameplay, a rewarding and addictive progression system, and a social aspect. If you are looking for a casual game that can stimulate your creativity and imagination, improve your concentration and logic skills, reduce your stress and boredom, entertain and educate you, and satisfy your curiosity and sense of achievement, then you should download and install Merge Animals My Perfect Zoo APK on your Android device.
-
FAQs
-
What are the minimum requirements to play Merge Animals My Perfect Zoo APK?
-
To play Merge Animals My Perfect Zoo APK, you need an Android device that has at least Android 4.4 version, 100 MB of free storage space, and a stable internet connection.
-
How can I get more coins in Merge Animals My Perfect Zoo APK?
-
You can get more coins in Merge Animals My Perfect Zoo APK by merging your animals, collecting coins from your animals, completing challenges and quests, watching ads, or buying coins with real money.
-
How can I unlock new animals in Merge Animals My Perfect Zoo APK?
-
You can unlock new animals in Merge Animals My Perfect Zoo APK by merging your existing animals, buying new animals with coins, or getting new animals from gifts or rewards.
-
How can I upgrade my hunters in Merge Animals My Perfect Zoo APK?
You can upgrade your hunters in Merge Animals My Perfect Zoo APK by merging two identical hunters, buying new hunters with coins, or getting new hunters from gifts or rewards.
-
How can I decorate my zoo in Merge Animals My Perfect Zoo APK?
-
You can decorate your zoo in Merge Animals My Perfect Zoo APK by buying various items and attractions with coins, such as fences, trees, flowers, benches, statues, fountains, rides, etc. You can also change the background and theme of your zoo, such as forest, desert, ice, etc.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md b/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md
deleted file mode 100644
index cddba9df8f1bfd8a9bbe5216a2cdb269bae4ff0d..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Talking Tom Gold Run 3: A Fun and Exciting Endless Runner Game
-
Introduction
-
Do you love endless runner games? Do you enjoy playing with cute and funny characters? Do you want to have a thrilling and adventurous experience on your mobile device? If you answered yes to any of these questions, then you should definitely check out Talking Tom Gold Run 3, the latest installment in the popular Talking Tom franchise.
Talking Tom Gold Run 3 is an endless runner game developed by Outfit7, the creators of My Talking Tom, My Talking Angela, My Talking Tom Friends and Talking Tom Hero Dash. In this game, you have to help Talking Tom and his friends chase down Roy Rakoon, a pesky raccoon who stole all their gold. Along the way, you have to collect as many gold bars as possible, dodge obstacles, use power-ups, and explore different worlds.
-
Why should you play Talking Tom Gold Run 3?
-
Talking Tom Gold Run 3 is a fun and exciting game that will keep you entertained for hours. Here are some reasons why you should play this game:
-
-
It has amazing graphics and animations that make the game look realistic and lively.
-
It has catchy music and sound effects that enhance the mood and atmosphere of the game.
-
It has simple and intuitive controls that make the game easy to play for anyone.
-
It has a variety of characters, worlds, outfits, and power-ups that make the game diverse and interesting.
-
It has a rewarding system that lets you upgrade your houses, unlock new items, and get bonuses and rewards.
-
It has a competitive element that lets you challenge yourself, beat your high score, and compete with other players around the world.
-
-
Features of Talking Tom Gold Run 3
-
Thrilling chases and action-packed time trials
-
One of the main features of Talking Tom Gold Run 3 is the thrilling chases and action-packed time trials. In this mode, you have to run as fast as you can, avoid obstacles, collect gold bars, and catch up with Roy Rakoon. The faster you run, the more gold bars you get. The more gold bars you get, the higher your score. The higher your score, the better your rank. You can also enter special zones that are marked by tunnels. These zones will transport you to a different world where you can collect more gold bars and encounter new challenges.
-
Exciting worlds and awesome power-ups
-
Another feature of Talking Tom Gold Run 3 is the exciting worlds and awesome power-ups. In this game, you can explore different worlds that have their own themes, environments, obstacles, and enemies. Some of the worlds are: Snowy Streets, Chinese Village, Wild West, Hawaii Beach, Pirate Cove, Dragon Castle, Space Station, Candy Land, and more. Each world has its own unique features and surprises that will keep you on your toes. You can also use various power-ups that will help you in your chase. Some of the power-ups are: Magnet, Helmet, Double Bars, Plane, and more. Each power-up has its own effect and duration that will make your run more fun and exciting.
Talking Tom Gold Run 4+ app for iPhone and iPad
-How to play Talking Tom speed, slide and dodge game
-Download Talking Tom Gold Run Outfit7 Limited for Android
-Talking Tom Gold Run tips and tricks to catch Roy Rakoon
-Best cat runner game with Talking Tom and friends
-Talking Tom Gold Run review and rating on App Store
-Talking Tom Gold Run latest version update and features
-Talking Tom Gold Run action-packed time trials and worlds
-Talking Tom Gold Run awesome power ups and outfits
-Talking Tom Gold Run customer support and privacy policy
-Talking Tom Gold Run YouTube videos and trailers
-Talking Tom Gold Run in-app purchases and virtual currency
-Talking Tom Gold Run COPPA Safe Harbor certification by PRIVO
-Talking Tom Gold Run unlock Talking Angela, Ginger, Ben and Hank
-Talking Tom Gold Run creators Outfit7 and other apps
-Talking Tom Gold Run thrilling chases and exciting adventures
-Talking Tom Gold Run offline mode and data safety
-Talking Tom Gold Run editor's choice and 500M+ downloads
-Talking Tom Gold Run 30 seconds gameplay challenge
-Talking Tom Gold Run net energy gain and mini Sun experiment
-Talking Tom Gold Run 100 million°C fusion reactor in South Korea
-Talking Tom Gold Run seven times hotter than the Sun's core
-Talking Tom Gold Run 15 million degrees kelvins temperature comparison
-Talking Tom Gold Run holy grail fusion experiment to create a mini Sun
-Talking Tom Gold Run nuclear fusion breakthrough and reactor run
-Talking Tom Gold Run free download for PC Windows 10/8/7
-Talking Tom Gold Run online play without installation or registration
-Talking Tom Gold Run mod apk unlimited money and gold bars
-Talking Tom Gold Run hack tool no survey no human verification
-Talking Tom Gold Run cheats codes and glitches for Android and iOS
-Talking Tom Gold Run alternatives and similar games to try out
-Talking Tom Gold Run comparison with Subway Surfers and Temple Run
-Talking Tom Gold Run fun facts and trivia about the game and characters
-Talking Tom Gold Run fan art and wallpapers for desktop and mobile
-Talking Tom Gold Run merchandise and toys for kids and adults
-Talking Tom Gold Run memes and jokes to make you laugh
-Talking Tom Gold Run fan fiction and stories to read online
-Talking Tom Gold Run cosplay and costumes for Halloween or parties
-Talking Tom Gold Run quiz and trivia to test your knowledge of the game
-Talking Tom Gold Run feedback and suggestions for improvement or new features
-
Friends to unlock and fun new outfits
-
A third feature of Talking Tom Gold Run 3 is the friends to unlock and fun new outfits. In this game, you can play with different characters from the Talking Tom franchise, such as Talking Tom, Talking Angela, Talking Hank, Talking Ben, Talking Ginger, and more. You can also unlock new characters by collecting enough gold bars or by completing certain tasks. Each character has its own personality and voice that will make you laugh and smile. You can also customize your characters by changing their outfits. You can choose from a variety of outfits that suit your style and mood. Some of the outfits are: Cowboy, Ninja, Astronaut, Pirate, Dragon, Candy, and more. Each outfit has its own special effect that will enhance your run.
-
Tips and Tricks for Talking Tom Gold Run 3
-
Go for the house upgrades for more points
-
One of the tips and tricks for Talking Tom Gold Run 3 is to go for the house upgrades for more points. In this game, you can use your gold bars to upgrade your houses. Each house has its own theme and design that matches the world you are in. For example, you can upgrade your Snowy House in the Snowy Streets world, your Chinese House in the Chinese Village world, your Western House in the Wild West world, and so on. Upgrading your houses will not only make them look nicer and cooler, but also increase your score multiplier. The higher your score multiplier, the more points you get for each gold bar you collect. Therefore, upgrading your houses is a good way to boost your score and rank.
-
Open the vaults for bonuses and rewards
-
Another tip and trick for Talking Tom Gold Run 3 is to open the vaults for bonuses and rewards. In this game, you can find vaults that are hidden in some of the worlds. These vaults contain valuable items that will help you in your run. Some of the items are: extra gold bars, power-ups, gems, keys, tokens, stickers, and more. To open a vault, you need to collect enough keys that are scattered throughout the worlds. You can also get keys by watching ads or by completing daily missions. Opening a vault will give you a random item that will make your run more enjoyable and rewarding.
-
Move fast, but not too recklessly
-
A third tip and trick for Talking Tom Gold Run 3 is to move fast, but not too recklessly. In this game, you need to move fast to catch up with Roy Rakoon and to collect as many gold bars as possible. However, moving too fast can also be risky, as you might crash into obstacles or miss important items. Therefore, you need to balance your speed and your caution when running. You need to be alert and attentive to the surroundings and react quickly to the changes. You need to swipe left or right to change lanes, swipe up to jump over obstacles or gaps, swipe down to slide under obstacles or bridges, and tap to use power-ups or activate special zones.
-
All characters play the same way
-
A fourth tip and trick for Talking Tom Gold Run 3 is to know that all characters play the same way. In this game, you can choose from different characters that have their own appearance and voice. However, these characters do not have any difference in terms of gameplay or performance. They all run at the same speed, have the same hitbox size, have the same power-up effects, and have the same score multiplier. Therefore, you do not need to worry about choosing the best character for your run. You can simply pick the one that you like the most or the one that suits your mood. The only thing that matters is your skill and your strategy.
-
The plane power-up is the most useful for collecting gold
-
A fifth tip and trick for Talking Tom Gold Run 3 is to know that the plane power-up is the most useful for collecting gold. In this game, you can use different power-ups that will give you various advantages and effects. However, among all the power-ups, the plane power-up is the most beneficial for collecting gold bars. The plane power-up will make you fly in the air and collect all the gold bars in your path. You do not need to worry about obstacles or enemies, as you can fly over them. You also do not need to change lanes or jump or slide, as you can fly straight ahead. The plane power-up will last for a few seconds, but it will give you a huge amount of gold bars. Therefore, you should always try to get the plane power-up whenever you see it.
-
Reviews of Talking Tom Gold Run 3
-
What do players say about Talking Tom Gold Run 3?
-
Talking Tom Gold Run 3 is a popular and well-received game that has millions of downloads and positive ratings on the app stores. Here are some of the reviews from the players who have played this game:
-
-
"This game is awesome! I love the graphics, the music, the characters, and the gameplay. It is so fun and addictive. I play it every day and I never get bored. It is one of the best games I have ever played."
-
"This game is amazing! It has so many worlds, outfits, power-ups, and surprises. It is so exciting and adventurous. I like how I can customize my characters and upgrade my houses. It is a great game for everyone."
-
"This game is fantastic! It has a lot of challenges, missions, rewards, and competitions. It is so thrilling and satisfying. I like how I can compete with other players and beat my high score. It is a very challenging game."
-
-
What are the pros and cons of Talking Tom Gold Run 3?
-
Like any other game, Talking Tom Gold Run 3 has its pros and cons that you should consider before playing it. Here are some of them:
-
-
-
| Pros | Cons |
| --- | --- |
| It has amazing graphics and animations. | It can be repetitive and monotonous. |
| It has catchy music and sound effects. | It can be noisy and annoying. |
| It has simple and intuitive controls. | It can be glitchy and unresponsive. |
| It has a variety of characters, worlds, outfits, and power-ups. | It can be expensive and time-consuming. |
| It has a rewarding system that lets you upgrade your houses, unlock new items, and get bonuses and rewards. | It can be frustrating and unfair. |
| It has a competitive element that lets you challenge yourself, beat your high score, and compete with other players around the world. | It can be stressful and addictive. |
-
-
-
Conclusion
-
Summary of the main points
-
Talking Tom Gold Run 3 is an endless runner game that lets you help Talking Tom and his friends chase down Roy Rakoon, a pesky raccoon who stole all their gold. In this game, you have to run as fast as you can, avoid obstacles, collect gold bars, use power-ups, and explore different worlds. You can also play with different characters, customize their outfits, upgrade their houses, open vaults, and compete with other players. Talking Tom Gold Run 3 is a fun and exciting game that will keep you entertained for hours.
-
Call to action
-
If you are looking for a fun and exciting endless runner game that will make you laugh and smile, then you should definitely download Talking Tom Gold Run 3 today. You will not regret it. You will have a blast playing this game with Talking Tom and his friends. So what are you waiting for? Download Talking Tom Gold Run 3 now and join the chase!
-
Frequently Asked Questions (FAQs)
-
Q: How do I download Talking Tom Gold Run 3?
A: You can download Talking Tom Gold Run 3 from the app stores of your mobile device. The game is available for both Android and iOS devices. You can also visit the official website of Outfit7 to get more information and links to download the game.
-
Q: How do I play Talking Tom Gold Run 3?
-
A: To play Talking Tom Gold Run 3, you need to swipe your finger on the screen to control your character. You can swipe left or right to change lanes, swipe up to jump over obstacles or gaps, swipe down to slide under obstacles or bridges, and tap to use power-ups or activate special zones. You need to collect as many gold bars as possible, avoid obstacles, and catch up with Roy Rakoon.
-
Q: How do I unlock new characters and outfits in Talking Tom Gold Run 3?
-
A: To unlock new characters and outfits in Talking Tom Gold Run 3, you need to collect enough gold bars or complete certain tasks. You can also get new characters and outfits by opening vaults, completing daily missions, or watching ads. Each character and outfit has its own price and requirement that you need to meet.
-
Q: How do I upgrade my houses in Talking Tom Gold Run 3?
-
A: To upgrade your houses in Talking Tom Gold Run 3, you need to use your gold bars. You can choose which house you want to upgrade from the home screen. Each house has its own theme and design that matches the world you are in. Upgrading your houses will increase your score multiplier and make your houses look nicer and cooler.
-
Q: How do I compete with other players in Talking Tom Gold Run 3?
-
A: To compete with other players in Talking Tom Gold Run 3, you need to enter the leaderboards mode. In this mode, you can see your rank and score compared to other players around the world. You can also see your friends' rank and score if you connect your game to Facebook. You can challenge yourself, beat your high score, and climb up the leaderboards.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md b/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md
deleted file mode 100644
index dfec194defbe428553969050e5f06d007ca66540..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md
+++ /dev/null
@@ -1,243 +0,0 @@
-
-
7 Best Recommended Apps for Downloading Songs from YouTube
-
YouTube is one of the most popular video platforms in the world, offering all kinds of content, including music. Many people like listening to songs on YouTube because of the good audio quality, the wide variety of genres, and the easy access. However, there are times when we want to listen to YouTube songs offline, for example when there is no internet connection or when we want to save mobile data. For that, we need a YouTube song downloader app that can convert the video format into MP3.
-
What Is a YouTube Song Downloader App?
-
A YouTube song downloader app is an app that can download music videos from YouTube and convert them into MP3 audio files. With such an app, we can save our favorite songs on our device and play them anytime without streaming online. YouTube song downloader apps are usually available for Android, Windows, Mac, or Linux.
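For readers who want to see what such an app does under the hood, below is a minimal sketch using the open-source yt-dlp library, which this article itself does not mention. The video URL is a placeholder, converting to MP3 requires FFmpeg to be installed, and you should only download audio you have the rights to, as the notes later in this article point out.

```python
# Minimal sketch of "YouTube video -> MP3" with yt-dlp (an assumption about
# what the article's apps do internally, not a description of any of them).
import yt_dlp

options = {
    "format": "bestaudio/best",                 # grab the best audio-only stream
    "outtmpl": "%(title)s.%(ext)s",             # name the file after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",            # convert the download to audio via FFmpeg
        "preferredcodec": "mp3",
        "preferredquality": "192",              # target bitrate in kbps
    }],
}

with yt_dlp.YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder URL
```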
Benefits of Using a YouTube Song Downloader App
-
There are several benefits we can get from using a YouTube song downloader app, including:
-
-
We can listen to songs offline without an internet connection.
-
We can save mobile data because there is no need to stream online.
-
We can choose the audio quality we want, from low to high.
-
We can arrange song playlists to our own taste.
-
We can share downloaded songs with friends via Bluetooth, email, or social media.
-
-
Things to Keep in Mind Before Downloading Songs from YouTube
-
Before using a YouTube song downloader app, there are a few things to keep in mind, including:
-
-
Make sure we have permission or the copyright to download the songs we want. Do not break the rules or infringe on other parties' rights.
-
Make sure the downloader app we choose is safe and trustworthy. Do not download apps from unclear or suspicious sources.
-
Make sure our device has enough storage space for the song files we are going to download.
-
Make sure our device has a stable and fast internet connection so the downloads run smoothly.
-
-
The 7 Best Apps for Downloading Songs from YouTube
-
Here are 7 recommended YouTube song downloader apps that we can use:
-
VidMate
-
VidMate is one of the best YouTube song downloader apps for Android. It can download not only songs but also videos, films, TV shows, and other content from sites such as Facebook, Instagram, TikTok, and more. It also has a built-in music and video player for playing the files you have downloaded.
-
VidMate Features and Advantages
-
Here are some of VidMate's features and advantages:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports batch downloading, i.e. downloading several files at once.
-
Supports background downloading, so files download without interrupting other activities on the device.
-
Supports fast downloads through multithreading, which splits a file into several parts.
-
Supports easy file management with categories, favorites, history, and more.
-
Supports automatic updates that add new features and fix bugs.
-
-
How to Use VidMate to Download Songs from YouTube
-
Here is how to use VidMate to download songs from YouTube:
-
-
-
Download and install the VidMate app on your Android device. You can get it from its official site or from a third-party app store.
-
Open VidMate and tap the Youtube icon on the home page.
-
Search for the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Tap the download icon at the bottom of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
-
Wait for the download to finish. You can check its progress in the Downloads menu.
-
Once it is done, you can play the downloaded song from the Music menu or with another music player app (see the sketch below).
-
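For the last step above, playing a downloaded file with "another music player app" can be as simple as handing its path to Android's built-in MediaPlayer. The Kotlin sketch below is a generic illustration; the example path is made up and not a folder any of these apps is guaranteed to use.

```kotlin
import android.media.MediaPlayer

// Plays a downloaded MP3 from a path on device storage (example path only).
fun playDownloadedSong(path: String = "/storage/emulated/0/Download/song.mp3") {
    val player = MediaPlayer()
    player.setDataSource(path)   // point the player at the local file
    player.prepare()             // synchronous prepare is fine for local files
    player.start()               // begin playback
    player.setOnCompletionListener { it.release() }  // free resources when done
}
```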
TubeMate
-
TubeMate is another excellent Youtube song downloader app for Android. Its interface looks a lot like Youtube itself, so it is easy to find and pick the song you want to download. It also has handy features such as fast mode, background mode, playlist mode, and more.
-
TubeMate Features and Strengths
-
Here are some of TubeMate's features and strengths:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports fast downloads with a fast mode that makes full use of the available internet connection.
-
Supports background downloading, so you can keep using the device for other things while a song downloads.
-
Supports playlist downloading, letting you download several songs at once into one folder.
-
Supports easy file management with a download list, playlists, favorites, and more.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use TubeMate to Download Songs from Youtube
-
Here is how to use TubeMate to download songs from Youtube:
-
-
Download and install the TubeMate app on your Android device. You can get it from its official site or from a third-party app store.
-
Open TubeMate and tap the Youtube icon on the home page.
-
Search for the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Tap the download icon at the top right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
-
Wait for the download to finish. You can check its progress in the Downloads menu.
-
Once it is done, you can play the downloaded song from the Music menu or with another music player app.
-
InsTube
-
InsTube is another top Youtube song downloader app for Android. It can download not only songs but also videos, films, TV shows, and other content from more than 100 sites, including Facebook, Instagram, TikTok, SoundCloud, and more. It also offers advanced features such as a video locker, HD downloads, and more.
-
InsTube Features and Strengths
-
Here are some of InsTube's features and strengths:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports HD downloads with sharp picture quality and clear sound.
-
Supports a video locker that can protect downloaded video files with a password.
-
Supports fast downloads using multithreading, which speeds up the download process.
-
Supports easy file management with categories, favorites, history, and more.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use InsTube to Download Songs from Youtube
-
Here is how to use InsTube to download songs from Youtube:
-
-
Download and install the InsTube app on your Android device. You can get it from its official site or from a third-party app store.
-
Open InsTube and tap the Youtube icon on the home page.
-
Search for the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
-
Wait for the download to finish. You can check its progress in the Downloads menu.
-
Once it is done, you can play the downloaded song from the Music menu or with another music player app.
-
SnapTube
-
SnapTube is another great Youtube song downloader app for Android. It has a simple, easy-to-use interface and some distinctive features such as night mode, data-saver mode, VIP mode, and more.
-
SnapTube Features and Strengths
-
Here are some of SnapTube's features and strengths:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports a night mode that darkens the background to save battery and protect your eyes.
-
Supports a data-saver mode that reduces data usage when downloading or streaming videos.
-
Supports a VIP mode that removes ads and adds other exclusive features.
-
Supports fast downloads using multithreading, which speeds up the download process.
-
Supports easy file management with categories, favorites, history, and more.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use SnapTube to Download Songs from Youtube
-
Here is how to use SnapTube to download songs from Youtube:
-
-
Download and install the SnapTube app on your Android device. You can get it from its official site or from a third-party app store.
-
Open SnapTube and tap the Youtube icon on the home page.
-
Search for the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
-
Wait for the download to finish. You can check its progress in the Downloads menu.
-
Once it is done, you can play the downloaded song from the Music menu or with another music player app.
-
Fvdtube
-
Fvdtube is another excellent Youtube song downloader app for Android. It has a clean, elegant interface and offers useful features such as lyrics downloading, subtitle downloading, playlist downloading, and more.
-
Fvdtube Features and Strengths
-
Here are some of Fvdtube's features and strengths:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports lyrics downloading, so lyrics can be shown while an audio file plays.
-
Supports subtitle downloading, so subtitles can be shown while a video plays.
-
Supports playlist downloading, which downloads every song in a playlist at once.
-
Supports fast downloads using multithreading, which speeds up the download process.
-
Supports easy file management with categories, favorites, history, and more.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use Fvdtube to Download Songs from Youtube
-
Here is how to use Fvdtube to download songs from Youtube:
-
-
Download and install the Fvdtube app on your Android device. You can get it from its official site or from a third-party app store.
-
Open Fvdtube and tap the Youtube icon on the home page.
-
Search for the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
-
Wait for the download to finish. You can check its progress in the Downloads menu.
-
Once it is done, you can play the downloaded song from the Music menu or with another music player app.
-
Ytmp3.cc
-
Ytmp3.cc is a top Youtube song downloader you can use on Windows, Mac, or Linux. It is actually a website that you open in a browser, so it is very easy to use and requires no installation or registration. It offers simple features such as MP3 downloads, MP4 downloads, and playlist downloads.
-
Ytmp3.cc Features and Strengths
-
Here are some of Ytmp3.cc's features and strengths:
-
-
Supports the MP3 audio format and the MP4 video format.
-
Supports video resolutions up to 1080p.
-
Supports playlist downloading, which downloads every song in a playlist at once.
-
Supports fast downloads using cloud technology, which speeds up the download process.
-
Supports easy file management with a download list and a playlist.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use Ytmp3.cc to Download Songs from Youtube
-
Here is how to use Ytmp3.cc to download songs from Youtube:
-
-
Open a browser on your Windows, Mac, or Linux device and go to the Ytmp3.cc website.
-
Find the song you want to download in the Youtube search box, or pick one from the recommendations.
-
Copy the URL of the song's video from Youtube.
-
Paste the copied URL into the input box on the Ytmp3.cc website.
-
Choose the MP3 audio format or the MP4 video format you want to download.
-
Click the Convert button and wait for the conversion to finish.
-
Click the Download button and wait for the download to finish.
-
Once it is done, you can play the downloaded song on your device with a suitable music or video player app.
-
TubePaw
-
TubePaw is another top Youtube song downloader app for Windows, Mac, or Linux. It has a modern, elegant interface and advanced features such as 4K downloads, 360-degree downloads, VR downloads, and more.
-
TubePaw Features and Strengths
-
Here are some of TubePaw's features and strengths:
-
-
Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
-
Supports video resolutions from 144p up to 4K.
-
Supports 4K downloads, for videos with very high picture quality.
-
Supports 360-degree downloads, for videos with a wide field of view.
-
Supports VR downloads, for videos with a virtual reality effect.
-
Supports fast downloads using multithreading, which speeds up the download process.
-
Supports easy file management with categories, favorites, history, and more.
-
Supports integration with social media such as Facebook, Twitter, Instagram, and more for sharing downloaded songs.
-
-
How to Use TubePaw to Download Songs from Youtube
-
Here is how to use TubePaw to download songs from Youtube:
-
-
Download and install the TubePaw app on your Windows, Mac, or Linux device. You can get it from its official site or from a third-party app store.
-
Open TubePaw and type the name of the song or the URL of its video into the search box.
-
Choose the audio or video format you want to download. You can also choose the quality based on file size.
-
Click the Download button and wait for the download to finish.
-
Once it is done, you can play the downloaded song on your device with a suitable music or video player app.
-
-
Conclusion
-
A Youtube song downloader app is an app that downloads music videos from Youtube and converts them into MP3 audio files. With such an app, we can listen to songs offline without an internet connection or online streaming. There are many good Youtube song downloader apps for Android, Windows, Mac, or Linux, including VidMate, TubeMate, InsTube, SnapTube, Fvdtube, Ytmp3.cc, and TubePaw. Each app has its own features and strengths, so we can pick the one that suits our needs and taste. Before using any of them, however, we should pay attention to a few things such as copyright, safety, storage space, and internet connection.
-
FAQ
-
Here are some frequently asked questions about Youtube song downloader apps:
-
-
Are Youtube song downloader apps legal?
-
Youtube song downloader apps are not entirely legal, because they infringe on content owners' copyright. If we only download songs for personal use and do not distribute or sell them to others, we are unlikely to run into legal trouble, but we should still respect the content owners' rights and avoid downloading songs protected by a license.
-
Are Youtube song downloader apps safe?
-
Not all Youtube song downloader apps are equally safe, so only download them from official sites or other trusted sources. Most can be used without subscribing or registering, although some offer premium features that cost extra; choose the one that fits your budget and needs.
-
Can Youtube song downloader apps download songs from other sites?
-
Youtube song downloader apps can download songs not only from Youtube but also from other sites such as Facebook, Instagram, TikTok, SoundCloud, and more. Not every app supports every site, though, so check the list of supported sites for the app you choose before downloading from other sites.
-
Can Youtube song downloader apps download songs with lyrics?
-
Youtube song downloader apps can download songs with lyrics if you choose an audio format that supports them, such as M4A or OGG. Not every app has this feature, so check what the app you choose offers before downloading songs with lyrics.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md b/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md
deleted file mode 100644
index dc5a58d984f37d582c6bf73fe93498d20ea78b27..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Standoff 2 Mod Apk New Version: Everything You Need to Know
-
If you are a fan of first-person shooter games on mobile devices, you have probably heard of Standoff 2. It is one of the most popular and realistic FPS games on Android and iOS platforms, with over 200 million players worldwide. It features stunning graphics, smooth gameplay, diverse weapons, competitive modes, and regular updates.
But what if you want to enhance your gaming experience even more? What if you want to unlock all the weapons, skins, stickers, and charms without spending any money? What if you want to have an unfair advantage over your opponents with aimbot and wallhack? Well, that's where a mod apk comes in.
-
A mod apk is a modified version of an original app that has been altered by third-party developers or hackers to add or remove certain features. By using a mod apk, you can bypass the limitations and restrictions imposed by the original app developers. You can also access premium content or functions for free.
-
However, using a mod apk also comes with drawbacks and risks. You may encounter bugs, errors, or crashes that ruin your gameplay, and you may expose your device or account to security threats such as viruses, malware, or hackers. You may also violate the original developers' terms of service or user agreement and face legal consequences for infringing their intellectual property rights.
-
-
So, how can you download and install Standoff 2 mod apk new version on your device? Here are the steps you need to follow:
-
How to download and install Standoff 2 mod apk new version
-
-
Go to the APKVIPO website and search for the "Standoff 2" keyword. Click on the "Download" button above or below the article. Choose the Standoff 2 mod apk version or Standoff 2 APK to download. Once the download is complete, click on the downloaded file to install the game.
-
If you have not enabled the installation of apps from unknown sources, you need to do so by going to your device settings, security, and toggle on the "Unknown sources" option.
-
After installing the game, you need to download the OBB file from the same website. Extract the OBB file and copy it to the Android/OBB folder on your device storage (see the sketch after these steps).
-
Launch the game and enjoy the mod features.
-
-
Note: You may need to uninstall the original version of Standoff 2 if you have it on your device before installing the mod apk. You may also need to have a stable internet connection and enough storage space on your device. The mod apk is compatible with Android 4.4 and above devices.
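To make step 3 above a little more concrete, here is a minimal Kotlin sketch of how an app can locate its own Android/OBB folder and check whether an expansion file is in place. The package-style file name is a placeholder for illustration, not the actual file name used by Standoff 2.

```kotlin
import android.content.Context
import java.io.File

// Resolves the per-app OBB directory (Android/obb/<package name>) and checks
// whether an expansion file with the expected name is already in place.
fun obbFilePresent(context: Context, expectedName: String = "main.1.com.example.game.obb"): Boolean {
    val obbDir: File = context.obbDir   // e.g. /storage/emulated/0/Android/obb/<package>
    val obbFile = File(obbDir, expectedName)
    return obbFile.exists() && obbFile.length() > 0
}
```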
-
What are the features of Standoff 2 mod apk new version
-
Standoff 2 mod apk new version offers a lot of amazing features that can make your gameplay more fun and exciting. Here are some of them:
-
-
Unlimited money and gold: You can get unlimited money and gold in the game, which you can use to buy or upgrade weapons, skins, stickers, charms, and other items. You can also use them to unlock crates, cases, or bundles that contain rare or exclusive items.
-
All weapons unlocked and upgraded: You can access all the weapons in the game, including pistols, rifles, shotguns, SMGs, snipers, knives, grenades, and more. You can also upgrade them to increase their damage, accuracy, range, fire rate, and other stats.
-
Aimbot and wallhack: You can enable aimbot and wallhack features in the game, which can help you aim better and see through walls. You can also adjust the settings of these features to suit your preference and style.
-
Custom skins and stickers: You can customize your weapons and characters with different skins and stickers that can change their appearance and style. You can also create your own skins and stickers using the in-game editor.
-
Anti-ban and anti-cheat protection: You can play the game without worrying about getting banned or detected by the anti-cheat system of Standoff 2. The mod apk has a built-in anti-ban and anti-cheat protection that can prevent any unwanted consequences.
-
-
How to play Standoff 2 mod apk new version
-
Standoff 2 mod apk new version is easy to play and enjoy. Here are some tips and tricks to improve your skills and performance in the game:
-
-
Choose your weapon wisely: Different weapons have different advantages and disadvantages in different situations. Choose a weapon that suits your playstyle and strategy. For example, if you prefer close-range combat, you may want to use a shotgun or an SMG. If you prefer long-range combat, you may want to use a sniper or a rifle.
-
Use cover and movement: Don't expose yourself too much to enemy fire. Use cover such as walls, boxes, cars, or barrels to protect yourself from bullets. Also, move around constantly to avoid being an easy target. Use sprinting, jumping, crouching, or sliding to dodge or surprise your enemies.
-
Communicate with your team: Standoff 2 is a team-based game that requires coordination and cooperation among teammates. Use voice chat or text chat to communicate with your team members. Share information such as enemy location, health status, weapon type, or strategy. Also, listen to your team leader or follow their commands.
-
Learn the game modes and maps: Standoff 2 has various game modes such as Deathmatch, Defuse, Arms Race, Capture the Flag, or custom games. Each game mode has different rules and objectives that you need to follow. Learn how each game mode works and what you need to do to win. Also, learn the maps of Standoff 2, such as Sandstone, Province, Rust Belt, Zone 9, or Old Town. Each map has different layouts, terrain, and features that you need to familiarize yourself with. Learn the best spots, routes, and angles to attack or defend.
-
Participate in challenges and tournaments: Standoff 2 has various challenges and tournaments that you can join to test your skills and compete with other players. You can also win rewards such as money, gold, weapons, skins, or stickers. Some of the challenges and tournaments are seasonal, weekly, daily, or special events.
-
-
Conclusion
-
Standoff 2 mod apk new version is a great way to enjoy Standoff 2 with more features and fun. You can download and install it easily on your device and play it with unlimited money, gold, weapons, skins, stickers, aimbot, wallhack, and more. You can also improve your skills and performance with some tips and tricks.
-
However, you should also be aware of the risks and consequences of using a mod apk. You may face technical issues, security threats, or legal actions. You may also lose your account or get banned from the game. You should use a mod apk at your own risk and discretion.
-
If you are interested in trying Standoff 2 mod apk new version, you can download it from the link below. But before you do that, make sure you read the disclaimer and warning carefully.
-
Disclaimer and warning: This article is for educational and informational purposes only. We do not endorse or promote the use of mod apks or any other illegal or unethical activities. We are not responsible for any damage or harm caused by the use of mod apks or any other content or links provided in this article. Use them at your own risk and discretion.
-
FAQs
-
-
-
Question
-
Answer
-
-
-
Is Standoff 2 mod apk safe to use?
-
Standoff 2 mod apk is not officially endorsed or supported by the developers of Standoff 2. It may contain viruses, malware, or other harmful code that can damage your device or compromise your personal data. Use it at your own risk and discretion.
-
-
-
Is Standoff 2 mod apk legal to use?
-
Standoff 2 mod apk may violate the terms of service and user agreement of Standoff 2. It may also infringe the intellectual property rights of the developers or other parties. Using a mod apk may result in account suspension, ban, or legal action.
-
-
-
How to update Standoff 2 mod apk?
-
Standoff 2 mod apk may not be compatible with the latest version of Standoff 2. To update the mod apk, you need to download and install the latest version of the mod apk file from a reliable source. You may also need to uninstall and reinstall the game to avoid any errors or glitches.
-
-
-
How to uninstall Standoff 2 mod apk?
-
To uninstall Standoff 2 mod apk, you need to go to your device settings, find the app manager, select Standoff 2, and tap on uninstall. You may also need to delete any residual files or folders related to the mod apk from your device storage.
-
-
-
Where can I find more information about Standoff 2 mod apk?
-
You can find more information about Standoff 2 mod apk from online forums, blogs, videos, or reviews. However, be careful about the sources you trust and verify the information before following any advice or instructions.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md b/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md
deleted file mode 100644
index 39e658b5ff9ffbd6b8acf1e333d143c60ec20101..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md
+++ /dev/null
@@ -1,158 +0,0 @@
-
-
Metal Slug: Awakening Download APK: How to Play the Remake of the Classic Arcade Game
-
If you are a fan of classic arcade games, then you probably know about Metal Slug. This legendary side-scrolling shooter has been entertaining gamers for decades with its fast-paced action, stunning graphics, and addictive gameplay. Now, you can enjoy a remake of this game on your mobile device with Metal Slug: Awakening.
-
Metal Slug: Awakening is a run-and-gun title for iOS and Android published by Tencent Games and developed by its subsidiary TiMi Studios. The game was released in China on April 18, 2023, with release dates for other regions not yet announced. It features 3D graphics, smooth animations, and various modes and features that make it a worthy successor to the original game.
In this article, we will show you how to download Metal Slug: Awakening APK for Android, how to play the game, and some tips and tricks to help you master it. Let's get started!
-
How to Download Metal Slug: Awakening APK for Android
-
If you want to play Metal Slug: Awakening on your Android device, you will need to download the APK file from a reliable source. APK stands for Android Package Kit, and it is a file format that contains all the necessary components to install an app on your device. However, before you download the APK file, you will need to make sure that your device meets the following requirements:
-
-
Android version 5.0 or higher (see the sketch after this list for a quick programmatic check)
-
At least 4 GB of free storage space
-
A stable internet connection
-
-
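If you want to verify the first two requirements programmatically rather than by hand, a minimal Kotlin sketch could look like the following. The helper names and the choice of the app's external files directory are assumptions made for illustration; the 4 GB figure comes from the list above.

```kotlin
import android.content.Context
import android.os.Build
import android.os.StatFs

// Android 5.0 corresponds to API level 21 (LOLLIPOP).
fun isAndroidVersionSupported(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP

// Checks that at least `requiredGb` of space is free where the app stores files.
fun hasEnoughStorage(context: Context, requiredGb: Long = 4): Boolean {
    val dir = context.getExternalFilesDir(null) ?: context.filesDir
    val freeBytes = StatFs(dir.path).availableBytes
    return freeBytes >= requiredGb * 1024L * 1024L * 1024L
}
```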
Once you have checked these requirements, you can follow these steps to download and install Metal Slug: Awakening APK:
-
-
Go to a trusted website that offers a Metal Slug: Awakening APK download link. For example, you can use JalanTikus, which is a popular Indonesian website that provides various apps and games for Android users.
-
Tap on the download button and wait for the APK file to be downloaded on your device.
-
Once the download is complete, locate the APK file in your device's file manager and tap on it.
-
You may see a warning message that says "Install unknown apps". This is because you are installing an app from a source other than Google Play Store. To proceed, tap on "Settings" and enable the option "Allow from this source" (see the sketch after these steps).
-
Go back to the APK file and tap on it again. This time, you should see an installation screen. Tap on "Install" and wait for the app to be installed on your device.
-
Once the installation is done, you can launch the app from your app drawer or home screen.
-
-
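Step 4 above refers to the per-app "Install unknown apps" permission that Android 8.0 and later require. The following Kotlin sketch shows a common, generic way to check that permission and open the matching settings screen; it is an illustration of the Android mechanism, not something the game does for you.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// On Android 8.0+ (API 26), each app needs its own permission to install packages.
fun canInstallUnknownApps(context: Context): Boolean =
    Build.VERSION.SDK_INT < Build.VERSION_CODES.O ||
        context.packageManager.canRequestPackageInstalls()

// Opens the "Install unknown apps" settings page for this app so the user can allow it.
fun openUnknownSourcesSettings(context: Context) {
    val intent = Intent(
        Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
        Uri.parse("package:${context.packageName}")
    )
    context.startActivity(intent)
}
```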
Congratulations! You have successfully downloaded and installed Metal Slug: Awakening APK on your Android device. Now, you can enjoy playing this amazing game anytime and anywhere.
-
How to Play Metal Slug: Awakening
-
Metal Slug: Awakening is a game that combines the classic elements of Metal Slug with some new features and modes that make it more fun and challenging. Here are some of the things you need to know about playing Metal Slug: Awakening:
-
Gameplay Features and Modes
-
Metal Slug: Awakening has various gameplay features and modes that offer different experiences and challenges. Some of them are:
-
-
Main Dungeon: This is the main mode of the game, where you have to complete missions and stages based on the original Metal Slug games. You can choose from different difficulty levels and earn rewards such as coins, gems, weapons, tanks, and characters.
-
PVP: This is the mode where you can compete with other players online in real-time battles. You can choose from different modes such as Team Deathmatch, Capture the Flag, and Ultimate Duel. You can also join the Sky Arena, where you can fight in the air with your tanks and planes.
-
PVE: This is the mode where you can team up with other players or AI allies to fight against waves of enemies and bosses. You can choose from different modes such as Survival, Boss Rush, and Raid. You can also join the World Adventure, where you can explore different regions and collect resources and rewards.
-
-
Characters and Weapons
-
Metal Slug: Awakening has a variety of characters and weapons that you can use to customize your gameplay style. Some of them are:
-
-
-
-
Character
-
Ability
-
-
-
Marco Rossi
-
Increases the damage of all weapons by 10%.
-
-
-
Tarma Roving
-
Increases the durability of all vehicles by 20%.
-
-
-
Eri Kasamoto
-
Increases the number of grenades by 2.
-
-
-
Fio Germi
-
Increases the ammo capacity of all weapons by 20%.
-
-
-
Nadia Cassel
-
Increases the critical rate of all weapons by 10%.
-
-
-
Trevor Spacey
-
Increases the movement speed by 10%.
-
-
-
Ralf Jones
-
Can use a Vulcan Punch that deals massive damage to enemies.
-
-
-
Clark Still
-
Can use a Super Argentine Backbreaker that throws enemies to the ground.
-
-
-
Leona Heidern
-
Can use a Moon Slasher that cuts through enemies with a blade of energy.
-
-
-
Corki The Forest Hunter (New Character)
-
Can use a Bow and Arrow that shoots arrows with different effects.
-
-
-
Metal Slug: Awakening also has a wide range of weapons that you can collect and upgrade. Some of them are:
-
-
Heavy Machine Gun: A rapid-fire weapon that can mow down enemies with ease.
-
Rocket Launcher: A powerful weapon that can launch explosive rockets that deal splash damage.
-
Laser Gun: A futuristic weapon that can fire a continuous beam of energy that pierces through enemies.
-
Flame Shot: A fiery weapon that can shoot flames that burn enemies and spread to nearby targets.
-
Shotgun: A close-range weapon that can shoot pellets that spread out and deal high damage.
-
Double Machine Gun: A dual-wield weapon that can shoot two streams of bullets at once.
-
Zantetsu Sword: A melee weapon that can slash enemies with a sharp blade.
-
Thunder Shot: A shocking weapon that can shoot bolts of electricity that stun enemies and chain to nearby targets.
-
Ak-74 Machine Gun (New Weapon): A modern weapon that can shoot bullets with high accuracy and damage.
-
Double Micro Submachine Gun (New Weapon): A compact weapon that can shoot two bursts of bullets at once.
-
Sniper Rifle (New Weapon): A long-range weapon that can shoot bullets with high precision and damage.
-
Rocket Propelled Grenade (New Weapon): A heavy weapon that can launch grenades that explode on impact and deal splash damage.
-
Ice Gun (New Weapon): A cool weapon that can shoot ice crystals that freeze enemies and slow them down.
-
-
Tips and Tricks
-
Metal Slug: Awakening is a game that requires skill, strategy, and reflexes to master. Here are some tips and tricks to help you improve your performance and enjoy the game more:
-
-
Aim for headshots: Shooting enemies in the head will deal more damage and sometimes cause them to drop items. Try to aim for headshots whenever possible to save ammo and finish enemies faster.
-
Dodge enemy attacks: Enemies will shoot, throw, or charge at you with various attacks. You can dodge them by jumping, sliding, or moving left or right. You can also use vehicles or obstacles to shield yourself from enemy fire. Dodging enemy attacks will help you avoid taking damage and losing lives.
-
Use vehicles and animals: Vehicles and animals are special items that you can find or summon in the game. They can help you move faster, deal more damage, and survive longer. For example, you can use a tank to blast enemies with a cannon, a camel to shoot fireballs, or a monkey to throw bananas. However, be careful as vehicles and animals can also be damaged or destroyed by enemy attacks.
-
Collect items and power-ups: Items and power-ups are scattered throughout the stages or dropped by enemies. They can help you replenish your health, ammo, grenades, or lives. They can also give you temporary boosts such as increased speed, damage, or invincibility. Try to collect as many items and power-ups as you can to enhance your gameplay.
-
Upgrade your characters and weapons: You can use coins and gems to upgrade your characters and weapons in the game. Upgrading your characters will increase their stats and abilities, while upgrading your weapons will increase their damage and ammo capacity. You can also unlock new characters and weapons by completing missions or stages. Upgrading your characters and weapons will make them more effective and powerful in the game.
-
-
Conclusion
-
Metal Slug: Awakening is a game that brings back the nostalgia of the classic arcade game with a modern twist. It has 3D graphics, smooth animations, and various modes and features that make it a fun and exciting game to play. You can download Metal Slug: Awakening APK for Android from a reliable source and install it on your device easily. You can also play the game with different characters and weapons, and use various tips and tricks to improve your performance and enjoy the game more.
-
If you are looking for a game that combines action, adventure, and humor, then Metal Slug: Awakening is the game for you. Download it now and join the battle against the evil forces of General Morden!
-
FAQs
-
Here are some of the frequently asked questions about Metal Slug: Awakening:
-
-
Q: Is Metal Slug: Awakening free to play?
-
A: Yes, Metal Slug: Awakening is free to play, but it also offers in-app purchases that can enhance your gameplay experience.
-
Q: Is Metal Slug: Awakening available for iOS devices?
-
A: Yes, Metal Slug: Awakening is available for iOS devices as well as Android devices.
-
Q: How can I play Metal Slug: Awakening with my friends?
-
A: You can play Metal Slug: Awakening with your friends by joining the PVP or PVE modes online. You can also invite your friends to join your team or challenge them to a duel.
-
Q: How can I get more coins and gems in Metal Slug: Awakening?
-
A: You can get more coins and gems in Metal Slug: Awakening by completing missions and stages, winning battles, collecting items, or buying them with real money.
-
Q: How can I contact the developers of Metal Slug: Awakening?
-
A: You can contact the developers of Metal Slug: Awakening by visiting their official website [here] or following their social media accounts [here] and [here].
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md b/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md
deleted file mode 100644
index 2adbc4e75bc36b0d9de15bd9b3ab3ca4e93ac1df..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
OculAR - Drive AR Cars APK: A Review
-
Have you ever dreamed of driving your favorite car in the real world, but without spending a fortune or breaking any laws? If so, you might want to check out OculAR - Drive AR Cars APK, a simulation game that lets you drive realistic cars in augmented reality (AR) using your Android device. In this article, we will review OculAR and tell you why you should give it a try.
-
Features: What can you do with OculAR
-
OculAR is one of the most realistic AR apps available on the Google Play Store for ARCore supported Android devices. It uses modern AR techniques to create immersive and interactive experiences that blend virtual and real worlds. With OculAR, you can:
Drive ultra realistic cars with realistic vehicle physics.
-
Perform stunts, do ramp jumps, and drift like a pro.
-
Place ramps, tires, and other objects to create your own tracks and scenarios.
-
Click pictures and share them with your friends on social media.
-
Choose from 12+ cars, including sports cars, muscle cars, trucks, and more.
-
-
The visuals of OculAR are so stunning that they will sometimes make you believe that it's a real car. You can adjust the size, position, and orientation of the car using simple gestures. You can also switch between different camera modes, such as first-person, third-person, or free camera.
-
How to download and install OculAR
-
Downloading and installing OculAR is very easy. All you need is an ARCore compatible device and an internet connection. Here are the steps to follow:
-
-
Go to the Google Play Store and search for OculAR - Drive AR Cars APK or click on this link.
-
Tap on Install and wait for the app to download.
-
Once the app is installed, open it and grant the necessary permissions for camera and storage.
-
Follow the instructions on the screen to scan your surroundings and place a car.
-
Enjoy driving your dream car in AR!
-
-
Pros and cons of OculAR
-
OculAR is a fun and innovative app that offers a lot of entertainment and excitement for car enthusiasts. However, like any app, it also has some drawbacks. Here are some of the pros and cons of OculAR:
-
-
Pros
Cons
-
- High-quality graphics and sound effects.
- Requires a lot of space and battery power.
-
- Easy to use and customize.
- May not work well in low-light or cluttered environments.
-
- Supports both indoor and outdoor modes.
- Limited number of cars and objects.
-
- Free to download and play.
- Contains ads and in-app purchases.
-
-
Conclusion
-
OculAR - Drive AR Cars APK is a simulation game that lets you drive realistic cars in augmented reality using your Android device. It has many features that make it fun and engaging, such as realistic physics, stunts, ramps, pictures, and more.
If you are a fan of cars and AR, you should definitely give OculAR a try. It is one of the best AR apps for Android that will make you feel like you are driving a real car in your own environment. You can have fun with your friends, show off your skills, and create amazing memories with OculAR.
-
So, what are you waiting for? Download OculAR - Drive AR Cars APK today and enjoy the ultimate AR driving experience. And don't forget to share your feedback and suggestions with the developers. They are always working hard to improve the app and add more features and content.
-
-
FAQs
-
Here are some of the frequently asked questions about OculAR:
-
Q1: What are the requirements for running OculAR?
-
A1: To run OculAR, you need an Android device that supports ARCore, which is Google's platform for building AR experiences. You can check the list of ARCore supported devices here. You also need a stable internet connection and enough storage space on your device.
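If you prefer to check ARCore support in code instead of searching the supported-devices list, a minimal Kotlin sketch using Google's ARCore SDK (the com.google.ar:core dependency) looks roughly like this; treat it as an illustration rather than anything OculAR itself exposes.

```kotlin
import android.content.Context
import com.google.ar.core.ArCoreApk

// Returns true if ARCore reports the device as supported.
// Note: checkAvailability() may return a transient value while a background check finishes.
fun isArCoreSupported(context: Context): Boolean {
    val availability = ArCoreApk.getInstance().checkAvailability(context)
    return availability.isSupported
}
```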
-
Q2: How realistic are the cars in OculAR?
-
A2: The cars in OculAR are very realistic and detailed. They are modeled after real-life cars and have accurate proportions, colors, textures, and sounds. You can also see the interior of the cars and interact with the steering wheel, pedals, and dashboard.
-
Q3: How can I take pictures and share them with my friends?
-
A3: Taking pictures and sharing them with your friends is very easy in OculAR. You just need to tap on the camera icon on the top right corner of the screen and choose whether you want to take a screenshot or a video. Then, you can edit your picture or video using filters, stickers, text, and more. Finally, you can share your picture or video with your friends on social media platforms such as Facebook, Instagram, WhatsApp, etc.
-
Q4: What are some of the stunts and ramps that I can use in OculAR?
-
A4: OculAR has many stunts and ramps that you can use to make your driving more fun and exciting. You can find them in the objects menu on the bottom left corner of the screen. Some of the stunts and ramps that you can use are:
-
-
Loop: A circular ramp that lets you do a 360-degree loop.
-
Quarter Pipe: A curved ramp that lets you do a vertical jump.
-
Half Pipe: A U-shaped ramp that lets you do a backflip or a frontflip.
-
Ramp: A straight ramp that lets you do a long jump.
-
Bridge: A bridge that lets you cross over a gap or an obstacle.
-
-
Q5: How can I get more cars and objects in OculAR?
-
A5: To get more cars and objects in OculAR, you need to earn coins by driving, performing stunts, taking pictures, and watching ads. You can also buy coins using real money through in-app purchases. Then, you can use your coins to unlock new cars and objects from the shop menu on the top left corner of the screen.
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md
deleted file mode 100644
index 6c74ed7175adf27d334c619575f498f9f8a1ef88..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
How to Download Call of Duty Mobile APK Terbaru
-
If you are a fan of first-person shooter games, you have probably heard of Call of Duty Mobile, one of the most popular and successful mobile games in the world. Call of Duty Mobile is a free-to-play game that brings the thrill and excitement of the Call of Duty franchise to your smartphone. You can play as iconic characters from the series, such as Captain Price, Ghost, Soap, and more, and compete in various multiplayer modes and battle royale on classic maps like Nuketown, Crash, and Hijacked.
But did you know that there is a way to enjoy the latest features and updates of Call of Duty Mobile without waiting for the official release on Google Play Store? Yes, you can download the Call of Duty Mobile APK Terbaru, which is the newest version of the game that has been modified and optimized for Android devices. In this article, we will show you how to download and install Call of Duty Mobile APK Terbaru, as well as some tips and tricks to improve your gameplay. Let's get started!
-
Features of Call of Duty Mobile APK Terbaru
-
Call of Duty Mobile APK Terbaru is not just a regular update, but a major overhaul that adds new features, modes, maps, characters, weapons, and more to the game. Here are some of the highlights:
-
Multiplayer Modes
-
Call of Duty Mobile APK Terbaru offers a variety of multiplayer modes that cater to different play styles and preferences. You can choose from Team Deathmatch, Domination, Kill-Confirmed, Search and Destroy, Hardpoint, Free-for-All, Gun Game, Capture the Flag, and more. You can also customize your match settings, such as time limit, score limit, number of players, etc. Whether you want a fast-paced action or a strategic challenge, you will find a mode that suits you.
-
-
Battle Royale Mode
-
If you are looking for a more immersive and intense experience, you can try the Battle Royale mode in Call of Duty Mobile APK Terbaru. This mode pits you against 99 other players in a massive map that shrinks over time. You can choose to play solo, duo, or squad mode, and select your class from Medic, Scout, Ninja, Clown, Defender, Mechanic, Airborne, or Hacker. You can also find vehicles, weapons, perks, loot boxes, air drops, and other items to help you survive. The last one standing wins!
-
Characters and Weapons
-
One of the best things about Call of Duty Mobile APK Terbaru is that you can play as your favorite characters from the Call of Duty universe, such as Captain Price, Ghost, Soap, Alex Mason, Frank Woods, John "Soap" MacTavish, and more. You can also unlock new skins, outfits, and accessories for your characters by completing missions, challenges, and events. You can also customize your weapons with different attachments, camos, stickers, and charms. You can choose from a wide range of weapons, such as assault rifles, sniper rifles, shotguns, SMGs, LMGs, pistols, launchers, melee weapons, and more.
-
How to Download and Install Call of Duty Mobile APK Terbaru
-
Now that you know the features of Call of Duty Mobile APK Terbaru, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:
-
Step 1: Download the APK file from a trusted source
-
The first thing you need to do is to download the APK file of Call of Duty Mobile APK Terbaru from a reliable and secure source. You can use this link to download the file directly to your device. The file size is about 1.8 GB, so make sure you have enough storage space and a stable internet connection.
-
Step 2: Enable unknown sources on your device settings
-
The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > toggle on. You might see a warning message, but don't worry, it's safe to proceed.
-
Step 3: Install the APK file and launch the game
-
The final step is to install the APK file and launch the game. To do this, locate the downloaded file on your device and tap on it. You will see a prompt asking you to install the app. Tap on install and wait for the process to finish. Once done, you can open the app and enjoy playing Call of Duty Mobile APK Terbaru!
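For readers curious what "tap on the downloaded file" does behind the scenes, the Kotlin sketch below hands a downloaded APK to the system installer. It assumes a FileProvider with the authority shown is declared in the app's manifest, which is an illustrative detail and not specific to Call of Duty Mobile.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Asks the system package installer to install an APK that was downloaded earlier.
fun launchApkInstall(context: Context, apkFile: File) {
    // The authority must match a <provider> entry in AndroidManifest.xml (assumed here).
    val uri = FileProvider.getUriForFile(context, "${context.packageName}.fileprovider", apkFile)
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```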
-
Tips and Tricks for Playing Call of Duty Mobile APK Terbaru
-
Now that you have successfully downloaded and installed Call of Duty Mobile APK Terbaru, you might want some tips and tricks to improve your gameplay and win more matches. Here are some of them:
-
Adjust your sensitivity and controls for optimal performance
-
One of the most important things to do before playing Call of Duty Mobile APK Terbaru is to adjust your sensitivity and controls for optimal performance. You can do this by going to the settings > controls > custom layout. Here you can change the size, position, and opacity of your buttons, as well as the sensitivity of your aim, movement, and firing. You can also choose between simple mode (auto-fire) or advanced mode (manual fire) depending on your preference.
-
Use headphones and voice chat to communicate with your teammates
-
Another tip for playing Call of Duty Mobile APK Terbaru is to use headphones and voice chat to communicate with your teammates. This will help you coordinate your strategies, call out enemy locations, request backup, and more. You can also use the quick chat feature to send pre-set messages or emojis to your team or opponents. To use voice chat or quick chat, just tap on the microphone or chat icon on the top left corner of your screen.
-
Learn the maps and the best spots to hide, snipe, and ambush
-
The last tip for playing Call of Duty Mobile APK Terbaru is to learn the maps and the best spots to hide, snipe, and ambush. Each map has its own layout, terrain, buildings, vehicles, and other features that can affect your gameplay. You should familiarize yourself with each map and find out where the best places are to take cover, snipe enemies from afar, or ambush them from behind. You can also use vehicles such as helicopters or tanks to move around faster or deal more damage.
-
Conclusion
-
In conclusion, Call of Duty Mobile APK Terbaru is a great way to enjoy the latest features and updates of one of the most popular mobile games in the world. You can download and install it easily by following our guide above. You can also improve your gameplay by following our tips and tricks above. We hope you have fun playing Call of Duty Mobile APK Terbaru!
-
FAQs
-
Q1: Is Call of Duty Mobile APK Terbaru safe to download?
-
A1: Yes, Call of Duty Mobile APK Terbaru is safe to download as long as you use a trusted and secure source. We recommend using this link to download the file directly to your device. However, you should always be careful when downloading and installing apps from unknown sources, as they might contain malware or viruses that can harm your device or data.
-
Q2: How much space does Call of Duty Mobile APK Terbaru require?
-
A2: Call of Duty Mobile APK Terbaru requires about 1.8 GB of storage space on your device. You should also have enough free space for additional data and updates that the game might need. You can check your available storage space by going to your device settings > storage.
-
Q3: Can I play Call of Duty Mobile APK Terbaru on PC?
-
A3: Yes, you can play Call of Duty Mobile APK Terbaru on PC by using an Android emulator. An emulator is software that allows you to run Android apps on your PC. There are many emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can download and install any of them on your PC, and then download and install Call of Duty Mobile APK Terbaru on the emulator. However, you should note that playing Call of Duty Mobile APK Terbaru on PC might give you an unfair advantage over other players who are playing on mobile devices, as you can use a keyboard and mouse instead of a touchscreen.
-
Q4: How can I update Call of Duty Mobile APK Terbaru?
-
A4: To update Call of Duty Mobile APK Terbaru, you need to download and install the latest version of the APK file from the same source that you used before. You can use this link to download the file directly to your device. You don't need to uninstall the previous version, as the new version will overwrite it. However, you should always backup your game data before updating, in case something goes wrong.
-
Q5: Where can I find more information about Call of Duty Mobile APK Terbaru?
-
A5: You can find more information about Call of Duty Mobile APK Terbaru by visiting the official website of the game, or by following its social media accounts on Facebook, Twitter, Instagram, YouTube, etc. You can also join the official Discord server or Reddit community of the game, where you can interact with other players, get news and updates, share feedback and suggestions, report bugs and issues, and more.
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md
deleted file mode 100644
index 3dab59fe96083a1b12ef2e9eea10f1d267d1a989..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
How to Download Files from the Internet
-
Downloading files from the internet is a common and useful task that you may need to do for various purposes, such as getting music, videos, documents, software, or images. However, downloading files can also be challenging, especially if you have to deal with large, multiple, or broken downloads. That's why you need a download manager to help you manage your downloads and make them faster, easier, and more reliable.
A download manager is a software tool that can monitor and intercept downloads from web browsers, but can also work independently. A download manager can offer many benefits over using your browser's built-in download function, such as:
-
Benefits of Using a Download Manager
-
-
It can speed up your downloads by using multiple connections and splitting files into smaller parts (see the sketch after this list).
-
It can resume your downloads if they are interrupted by network problems, power outages, or computer crashes.
-
It can organize your downloads by categories, folders, or priorities.
-
It can preview your downloads before they are completed, such as playing audio or video files or viewing images.
-
It can convert your downloads to different formats, such as MP3, MP4, AVI, or ZIP.
-
It can scan your downloads for viruses or malware and ensure their safety.
-
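To make the first benefit above concrete, here is a rough Python sketch of how a download manager can split a file into byte ranges and fetch them over several connections at once. The URL and file name are placeholders, the snippet assumes the `requests` package is installed, and it only works when the server reports a Content-Length and supports HTTP Range requests.

```python
import concurrent.futures
import requests

def fetch_range(url: str, start: int, end: int) -> bytes:
    """Fetch one byte range of the file (one "connection")."""
    resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=30)
    resp.raise_for_status()
    return resp.content

def parallel_download(url: str, dest: str, parts: int = 4) -> None:
    """Split the file into `parts` ranges and download them concurrently."""
    size = int(requests.head(url, timeout=30).headers["Content-Length"])
    step = size // parts
    ranges = [
        (i * step, size - 1 if i == parts - 1 else (i + 1) * step - 1)
        for i in range(parts)
    ]
    with concurrent.futures.ThreadPoolExecutor(max_workers=parts) as pool:
        chunks = list(pool.map(lambda r: fetch_range(url, *r), ranges))
    with open(dest, "wb") as f:
        for chunk in chunks:
            f.write(chunk)

# Placeholder URL for illustration only:
# parallel_download("https://example.com/big_file.zip", "big_file.zip")
```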
-
Types of Files You Can Download
-
There are many types of files you can download from the internet, depending on your needs and preferences. Some of the most common ones are:
-
-
Music files, such as MP3, WAV, or FLAC.
-
Video files, such as MP4, AVI, or MKV.
-
Document files, such as PDF, DOCX, or TXT.
-
Software files, such as EXE, MSI, or APK.
-
Image files, such as JPG, PNG, or GIF.
-
-
How to Choose the Best Download Manager for Your Needs
-
There are many download managers available for Windows users, but not all of them are created equal. Some may have more features than others, some may be easier to use than others, and some may be more compatible with your browser than others. To choose the best download manager for your needs, you should consider the following factors:
-
Features to Look for in a Download Manager
-
-
The speed and reliability of the downloads. You want a download manager that can make your downloads faster and more stable by using multiple connections and resuming broken downloads.
-
The interface and usability of the download manager. You want a download manager that has a simple and intuitive interface that lets you easily access and control your downloads.
-
The compatibility and integration with your browser. You want a download manager that can work well with your preferred browser and can automatically capture the download links from web pages.
-
The security and privacy of the download manager. You want a download manager that can protect your downloads from viruses or malware and can encrypt your data if needed.
-
The customization and flexibility of the download manager. You want a download manager that can be customized to suit your preferences and needs, such as changing the settings, themes, or languages.
-
-
Top Free Download Managers for Windows
-
To help you choose the best download manager for your needs, we have compiled a list of some of the top free download managers for Windows that offer most of the features mentioned above. Here they are:
-
-
Free Download Manager
-
Free Download Manager is a powerful and easy-to-use download manager that can handle all kinds of downloads, from torrents to videos. It can accelerate your downloads by up to 10 times, resume broken downloads, and schedule your downloads for later. It also has a built-in media converter, a video downloader, and a browser extension that integrates with Chrome, Firefox, Edge, and Opera. Free Download Manager is free and open-source, and supports Windows, Mac, and Linux.
-
-
Ninja Download Manager
-
Ninja Download Manager is a sleek and modern download manager that can boost your download speed by using multiple connections and splitting files into chunks. It can also resume your downloads from where they left off, even if the server does not support it. It has a user-friendly interface that lets you manage your downloads with drag-and-drop, pause and resume, and preview features. It also has a video downloader that can capture videos from popular sites like YouTube and Vimeo. Ninja Download Manager is free for personal use, and supports Windows and Mac.
-
JDownloader
-
JDownloader is a download manager that specializes in downloading files from one-click hosting sites like Rapidshare, Megaupload, Mediafire, and others. It can bypass the captchas and wait times that these sites impose, and can download multiple files at once with premium accounts. It also has a link grabber that can scan web pages for download links, a clipboard monitor that can detect copied links, and a remote control feature that lets you manage your downloads from another device. JDownloader is free and open-source, and supports Windows, Mac, Linux, and other platforms.
How to Use a Download Manager to Download Files
-
Now that you have chosen the best download manager for your needs, you can start using it to download files from the internet. The exact steps may vary depending on the download manager you use, but the general process is similar. Here are the steps to download files with a download manager, followed by a minimal sketch of what happens under the hood:
-
Steps to Download Files with a Download Manager
-
-
Install and launch the download manager on your computer.
-
Copy the URL of the file you want to download from your browser or any other source.
-
Paste the URL into the download manager's input box or use the browser extension to capture the link automatically.
-
Choose the destination folder, file name, and other settings for your download.
-
Click on the start or download button to begin your download.
-
Monitor the progress and status of your download in the download manager's interface.
-
Once the download is completed, you can open, play, or view the file from the download manager or from the destination folder.
-
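As a rough illustration of steps 2 to 6 above, the Python sketch below downloads a file to a chosen destination and resumes from a partial file using an HTTP Range header, which is essentially what a download manager's resume feature does. The URL and file name are placeholders, the `requests` package is assumed to be installed, and resuming only works when the server supports range requests.

```python
import os
import requests

def download(url: str, dest: str, chunk_size: int = 1 << 16) -> None:
    """Download `url` to `dest`, resuming from a partial file if one exists."""
    resume_from = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={resume_from}-"} if resume_from else {}

    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 means the server honoured the Range header; 200 means it sent
        # the whole file again, so start writing from scratch.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(dest, mode) as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)

if __name__ == "__main__":
    # Placeholder URL and file name for illustration only.
    download("https://example.com/big_file.zip", "big_file.zip")
```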
-
Tips and Tricks to Optimize Your Downloads
-
To make the most of your download manager and your downloads, you can follow these tips and tricks:
-
-
Adjust the number of connections and the bandwidth limit according to your network speed and availability.
-
Use a VPN or a proxy server to bypass geo-restrictions or censorship that may prevent you from downloading certain files.
-
Use a checksum or a hash function to verify the integrity and authenticity of your downloads and avoid corrupted or tampered files (see the sketch after this list).
-
Use a scheduler or a timer to start or stop your downloads at specific times or intervals.
-
Use filters or rules to sort your downloads by categories, types, sizes, dates, or other criteria.
-
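As an example of the checksum tip above, the following Python sketch computes the SHA-256 digest of a downloaded file and compares it with the value published on the download page. The file name and expected digest are placeholders.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: compare against the checksum published by the site.
expected = "put-the-published-sha256-value-here"
actual = sha256_of("big_file.zip")
print("OK" if actual == expected else "Checksum mismatch - re-download the file")
```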
-
Conclusion
-
Downloading files from the internet is a common and useful task that can be made easier and faster with a download manager. A download manager can offer many benefits over using your browser's built-in download function, such as speeding up your downloads, resuming broken downloads, organizing your downloads, previewing your downloads, converting your downloads, and scanning your downloads. There are many types of files you can download from the internet, such as music, video, document, software, or image files. To choose the best download manager for your needs, you should consider the features, interface, compatibility, security, and customization of the download manager. Some of the top free download managers for Windows are Free Download Manager, Ninja Download Manager, and JDownloader. To use a download manager to download files, you need to copy and paste the URL of the file into the download manager or use the browser extension to capture it automatically. Then you need to choose the destination folder, file name, and other settings for your download. Finally, you need to start your download and monitor its progress and status. To optimize your downloads, you can adjust the number of connections and the bandwidth limit, use a VPN or a proxy server, use a checksum or a hash function, use a scheduler or a timer, and use filters or rules.
-
FAQs
-
Here are some frequently asked questions about downloading files from the internet:
-
-
What is the difference between downloading and streaming?
-Downloading is when you save a file from the internet to your computer or device for later use. Streaming is when you play a file from the internet without saving it to your computer or device. Downloading usually requires more storage space than streaming, but it also lets you access the file offline and without interruptions.
-
How can I resume a failed or interrupted download?
-If your download fails or is interrupted due to network problems, power outages, or computer crashes, you can try to resume it with your download manager. Most download managers have a resume feature that can continue your download from where it left off. However, this may not work if the server does not support resuming downloads or if the file has been removed or changed.
-
How can I speed up my downloads?
-There are several ways to speed up your downloads, such as using a faster internet connection, closing other applications that use bandwidth, clearing your browser cache and cookies, using multiple connections and splitting files into smaller parts with your download manager, using a VPN or a proxy server to bypass throttling or congestion, and downloading files from reliable and fast servers.
-
How can I protect my downloads from viruses or malware?
-There are several ways to protect your downloads from viruses or malware, such as using a reputable and updated antivirus software on your computer or device, using a download manager that can scan your downloads for viruses or malware before or after downloading them, and avoiding downloading files from suspicious or unknown sources or links.
-
How can I convert my downloads to different formats?
-There are several ways to convert your downloads to different formats, such as using a download manager that has a built-in media converter that can change the format of your audio, video, or image files, using an online converter that can upload and convert your files to various formats, or using a standalone converter that can install and run on your computer or device and convert your files offline.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md
deleted file mode 100644
index fa5d39b6213f8a5e142b643575f99d9149cc71c6..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md
+++ /dev/null
@@ -1,21 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2020 Vercel, Inc.
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/spaces/fffiloni/lama-video-watermark-remover/README.md b/spaces/fffiloni/lama-video-watermark-remover/README.md
deleted file mode 100644
index 5cb779ba2b31cbb0bdd23a1fb74716caa3b0fae9..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: LaMa Video Watermark Remover
-emoji: 🌖
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.0.24
-python_version: 3.7.13
-app_file: app.py
-pinned: false
-duplicated_from: fffiloni/lama
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version`: _string_
-Version of the selected SDK (`gradio` or `streamlit`).
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`python_version`: _string_
-Any valid Python 3.x or 3.x.x version.
-Defaults to 3.8.9.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/fffiloni/video2openpose2/app.py b/spaces/fffiloni/video2openpose2/app.py
deleted file mode 100644
index 73a129fe57a43ff8dd20cd2325e82b1df5f2d6ae..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/video2openpose2/app.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import gradio as gr
-from controlnet_aux import OpenposeDetector
-import os
-import cv2
-import numpy as np
-from PIL import Image
-from moviepy.editor import *
-
-openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
-
-def get_frames(video_in):
- frames = []
- #resize the video
- clip = VideoFileClip(video_in)
-
- #check fps
- if clip.fps > 30:
- print("vide rate is over 30, resetting to 30")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=30)
- else:
- print("video rate is OK")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=clip.fps)
-
- print("video resized to 512 height")
-
- # Opens the Video file with CV2
- cap= cv2.VideoCapture("video_resized.mp4")
-
- fps = cap.get(cv2.CAP_PROP_FPS)
- print("video fps: " + str(fps))
- i=0
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret == False:
- break
- cv2.imwrite('kang'+str(i)+'.jpg',frame)
- frames.append('kang'+str(i)+'.jpg')
- i+=1
-
- cap.release()
- cv2.destroyAllWindows()
- print("broke the video into frames")
-
- return frames, fps
-
-def get_openpose_filter(i):
- image = Image.open(i)
-
- #image = np.array(image)
-
- image = openpose(image)
- #image = Image.fromarray(image)
- image.save("openpose_frame_" + str(i) + ".jpeg")
- return "openpose_frame_" + str(i) + ".jpeg"
-
-def create_video(frames, fps, type):
- print("building video result")
- clip = ImageSequenceClip(frames, fps=fps)
- clip.write_videofile(type + "_result.mp4", fps=fps)
-
- return type + "_result.mp4"
-
-def convertG2V(imported_gif):
- clip = VideoFileClip(imported_gif.name)
- clip.write_videofile("my_gif_video.mp4")
- return "my_gif_video.mp4"
-
-def infer(video_in):
-
-
- # 1. break video into frames and get FPS
- break_vid = get_frames(video_in)
- frames_list= break_vid[0]
- fps = break_vid[1]
- #n_frame = int(trim_value*fps)
- n_frame = len(frames_list)
-
- if n_frame >= len(frames_list):
- print("video is shorter than the cut value")
- n_frame = len(frames_list)
-
- # 2. prepare frames result arrays
- result_frames = []
- print("set stop frames to: " + str(n_frame))
-
- for i in frames_list[0:int(n_frame)]:
- openpose_frame = get_openpose_filter(i)
- result_frames.append(openpose_frame)
- print("frame " + i + "/" + str(n_frame) + ": done;")
-
-
- final_vid = create_video(result_frames, fps, "openpose")
-
- files = [final_vid]
-
- return final_vid, files
-
-title="""
-
-
-
- Video to OpenPose
-
-
-
-
-"""
-
-with gr.Blocks() as demo:
- with gr.Column():
- gr.HTML(title)
- with gr.Row():
- with gr.Column():
- video_input = gr.Video(source="upload", type="filepath")
- gif_input = gr.File(label="import a GIF instead", file_types=['.gif'])
- gif_input.change(fn=convertG2V, inputs=gif_input, outputs=video_input)
- submit_btn = gr.Button("Submit")
-
- with gr.Column():
- video_output = gr.Video()
- file_output = gr.Files()
-
- submit_btn.click(fn=infer, inputs=[video_input], outputs=[video_output, file_output])
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/firefighter/PdfSumGPT/utils/chatgpt.py b/spaces/firefighter/PdfSumGPT/utils/chatgpt.py
deleted file mode 100644
index af350471885e97b7e18652cc624b06e3e6b8eb06..0000000000000000000000000000000000000000
--- a/spaces/firefighter/PdfSumGPT/utils/chatgpt.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import random
-
-import openai
-
-
-class ChatGPTAPI:
- def __init__(self, api_key: str = '', max_input_length: int = 1024):
- openai.api_key = self.load_api_key(api_key)
- self.max_input_length = max_input_length
-
- @staticmethod
- def load_api_key(api_key: str):
- if not api_key:
- try:
- api_key = open('data/api_key.txt', 'r').read()
- except Exception as e:
- raise Exception(f'ChatGPT Error: No API key provided {e}')
-
- if '\n' in api_key:
- api_key_list = api_key.strip().split('\n')
- api_key = random.choice(api_key_list)
- return api_key
-
- def __call__(self, content: str):
- assert isinstance(content, str), 'ChatGPT Error: content must be a string'
- content = content.strip()
- messages = [{'role': 'user', 'content': content}]
- try:
- resp = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=messages
- )
- output: str = resp['choices'][0]['message']['content']
- output = output.strip()
- except Exception as e:
- raise Exception(f'ChatGPT Error: {e}')
- return output
-
-
-if __name__ == '__main__':
- chatgpt = ChatGPTAPI()
- response = chatgpt('Hello, how are you?')
- print(response)
diff --git a/spaces/flax-community/code-clippy-problem-solver/app.py b/spaces/flax-community/code-clippy-problem-solver/app.py
deleted file mode 100644
index 085638973f2086140019657ad2f27409796bc6a1..0000000000000000000000000000000000000000
--- a/spaces/flax-community/code-clippy-problem-solver/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import urllib
-
-import streamlit as st
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-# model_name = "flax-community/gpt-neo-1.3B-apps-all"
-model_name = "flax-community/gpt-neo-125M-apps-all"
-
-
-@st.cache(allow_output_mutation=True, max_entries=1)
-def get_model():
- model = AutoModelForCausalLM.from_pretrained(model_name)
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- tokenizer.pad_token = tokenizer.eos_token
- return (model, tokenizer)
-
-
-def format_input(question, starter_code=""):
- answer_type = (
- "\nUse Call-Based format\n" if starter_code else "\nUse Standard Input format\n"
- )
- return f"\nQUESTION:\n{question}\n{starter_code}\n{answer_type}\nANSWER:\n"
-
-
-def clean_text(generation):
-    # clean up text as discussed in OpenAI's paper "Evaluating Large Language Models Trained on Code"
- generation = generation.split("\ndef")[0]
- generation = generation.split("\nclass")[0]
- generation = generation.split("\n#")[0]
- generation = generation.split("\nif")[0]
-
- return generation
-
-
-def generate_solution(
- model, tokenizer, question, starter_code="", temperature=1.0, num_beams=1
-):
- prompt = format_input(question, starter_code)
- input_ids = tokenizer(prompt, return_tensors="pt").input_ids
- start = len(input_ids[0])
-
- output = model.generate(
- input_ids,
- max_length=start + 150,
- do_sample=True,
- top_p=0.95,
- pad_token_id=tokenizer.pad_token_id,
- eos_token_id=tokenizer.eos_token_id,
- early_stopping=True,
- temperature=temperature,
- num_beams=int(num_beams),
- no_repeat_ngram_size=None,
- repetition_penalty=None,
- num_return_sequences=None,
- )
- output_str = tokenizer.decode(output[0][start:], skip_special_tokens=True).strip()
- output_str = clean_text(output_str)
-
- return output_str
-
-
-_EXAMPLES = [
- [
- """
-Given a 2D list of size `m * n`. Your task is to find the sum of minimum value in each row.
-For Example:
-```python
-[
- [1, 2, 3, 4, 5], # minimum value of row is 1
- [5, 6, 7, 8, 9], # minimum value of row is 5
- [20, 21, 34, 56, 100] # minimum value of row is 20
-]
-```
-So, the function should return `26` because sum of minimums is as `1 + 5 + 20 = 26`
- """,
- "",
- 0.8,
- ],
- [
- """
-# Personalized greeting
-
-Create a function that gives a personalized greeting. This function takes two parameters: `name` and `owner`.
- """,
- """
-Use conditionals to return the proper message:
-
-case| return
---- | ---
-name equals owner | 'Hello boss'
-otherwise | 'Hello guest'
-def greet(name, owner):
- """,
- 0.8,
- ],
-]
-
-
-def run():
- st.set_page_config(page_title="Code Clippy Problem Solver")
- # sidebar
- st.sidebar.title("Code Clippy")
- st.sidebar.image(
- "https://raw.githubusercontent.com/ncoop57/gpt-code-clippy/camera-ready/code_clippy_logo.jpg",
- caption="(c) awesome Aimee Trevett",
- )
- st.sidebar.markdown("[Github](https://github.com/ncoop57/gpt-code-clippy)")
- st.sidebar.markdown("[Report](https://github.com/ncoop57/gpt-code-clippy/wiki)")
-
- st.sidebar.markdown("### Controls:")
-
- temperature = st.sidebar.slider(
- "Temperature",
- min_value=0.5,
- max_value=1.5,
- value=0.8,
- step=0.1,
- )
- num_beams = st.sidebar.slider(
- "Num beams",
- min_value=1,
- max_value=4,
- step=1,
- )
-
- # main body
- model, tokenizer = get_model()
-
- question = st.text_input(
- "Problem: ",
- value="A function that can greet user by name. Given a name it should say hello to user.",
- help="Text description of the coding problem to be solved",
- )
- starter_code = st.text_input(
- "Started code: ", value="def greet(name):", help="Optional starter code"
- )
- submit_button = st.button("Solve")
-
- if submit_button:
- text = st.text("Generating solution...")
- # gif from https://giphy.com/gifs/alan-DfSXiR60W9MVq
- gif_runner = st.image("./loading.gif")
- output = generate_solution(
- model, tokenizer, question, starter_code, temperature, num_beams
- )
- text.empty()
- gif_runner.empty()
-
- st.text("Solution:")
- st.code(output, language="python")
-
- # Create link to carbon to make a nice screenshot of the generated code
- url_code = urllib.parse.quote(f"# {question}\n{output}")
- st.markdown(
- f"[Would you like a Carbon Copy?](https://carbon.now.sh/?bg=rgba%280%2C0%2C0%2C0%29&t=seti&wt=none&l=python&ds=false&dsyoff=20px&dsblur=68px&wc=true&wa=false&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=2x&wm=false&code={url_code})"
- )
-
-
-if __name__ == "__main__":
- run()
\ No newline at end of file
diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js
deleted file mode 100644
index c1cab808bb91c444e053d77ea66e18523f39923c..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js
+++ /dev/null
@@ -1,54 +0,0 @@
-import Component from '../lib/component.js';
-import store from '../store/index.js';
-
-/**
- * @classdesc UI component for "About..." tab.
- */
-export default class AboutTab extends Component {
-
- /**
- * @constructor
- */
- constructor() {
- super({
- store,
- element: document.querySelector('#about-tab'),
- eventName: 'aboutTabChange'
- });
- }
-
- /**
- * Renders the global UI elements.
- */
- render() {
- let dict = window.lang_dict[store.state.language]['aboutTab'];
-
- // Purpose section
- this.element.querySelector('#purpose-title').innerHTML = dict['purposeTitle'];
- this.element.querySelector('#purpose-text').innerHTML = dict['purposeText'];
-
- // RL section
- this.element.querySelector('#rl-title').innerHTML = dict['rlTitle'];
- this.element.querySelector('#rl-text').innerHTML = dict['rlText'];
-
- // DRL section
- this.element.querySelector('#drl-title').innerHTML = dict['drlTitle'];
- this.element.querySelector('#drl-text').innerHTML = dict['drlText'];
-
- // ACL section
- this.element.querySelector('#acl-title').innerHTML = dict['aclTitle'];
- this.element.querySelector('#acl-text').innerHTML = dict['aclText'];
-
- // About demo section
- this.element.querySelector('#about-demo-title').innerHTML = dict['aboutDemoTitle'];
- this.element.querySelector('#about-demo-text').innerHTML = dict['aboutDemoText'];
-
- // Credits section
- this.element.querySelector('#credits-title').innerHTML = dict['creditsTitle'];
- this.element.querySelector('#credits-text').innerHTML = dict['creditsText'];
-
- // References section
- this.element.querySelector('#references-title').innerHTML = dict['referencesTitle'];
- this.element.querySelector('#references-text').innerHTML = dict['referencesText'];
- }
-};
\ No newline at end of file
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py
deleted file mode 100644
index 0a3df741f3a6118297898574e4a7bf6921272038..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py
+++ /dev/null
@@ -1,295 +0,0 @@
-import numpy as np
-
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-import time
-from collections import deque
-
-
-class Peer(NPC):
- """
-    A peer NPC that walks toward one of the two doors; the agent has to help it by unlocking that door
- """
-
- def __init__(self, color, name, env):
- super().__init__(color)
- self.name = name
- self.npc_dir = 1 # NPC initially looks downward
- self.npc_type = 0
- self.env = env
- self.npc_actions = []
- self.dancing_step_idx = 0
- self.actions = MiniGridEnv.Actions
- self.add_npc_direction = True
- self.available_moves = [self.rotate_left, self.rotate_right, self.go_forward, self.toggle_action]
-
- selected_door_id = self.env._rand_elem([0, 1])
- self.selected_door_pos = [self.env.door_pos_top, self.env.door_pos_bottom][selected_door_id]
- self.selected_door = [self.env.door_top, self.env.door_bottom][selected_door_id]
- self.joint_attention_achieved = False
-
- def can_overlap(self):
- # If the NPC is hidden, agent can overlap on it
- return self.env.hidden_npc
-
- def encode(self, nb_dims=3):
- if self.env.hidden_npc:
- if nb_dims == 3:
- return (1, 0, 0)
- elif nb_dims == 4:
- return (1, 0, 0, 0)
- else:
- return super().encode(nb_dims=nb_dims)
-
- def step(self):
-
- distance_to_door = np.abs(self.selected_door_pos - self.cur_pos).sum(-1)
-
- if all(self.front_pos == self.selected_door_pos) and self.selected_door.is_open:
- # in front of door
- self.go_forward()
-
- elif distance_to_door == 1 and not self.joint_attention_achieved:
- # before turning to the door look at the agent
- wanted_dir = self.compute_wanted_dir(self.env.agent_pos)
- act = self.compute_turn_action(wanted_dir)
- act()
- if self.is_eye_contact():
- self.joint_attention_achieved = True
-
- else:
- act = self.path_to_toggle_pos(self.selected_door_pos)
- act()
-
- # not really important as the NPC doesn't speak
- if self.env.hidden_npc:
- return None
-
-
-
-class HelperGrammar(object):
-
- templates = ["Move your", "Shake your"]
- things = ["body", "head"]
-
- grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)])
-
- @classmethod
- def construct_utterance(cls, action):
- return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " "
-
-
-class HelperEnv(MultiModalMiniGridEnv):
- """
- Environment in which the agent is instructed to go to a given object
- named using an English text string
- """
-
- def __init__(
- self,
- size=5,
- diminished_reward=True,
- step_penalty=False,
- knowledgeable=False,
- max_steps=20,
- hidden_npc=False,
- ):
- assert size >= 5
- self.empty_symbol = "NA \n"
- self.diminished_reward = diminished_reward
- self.step_penalty = step_penalty
- self.knowledgeable = knowledgeable
- self.hidden_npc = hidden_npc
-
- super().__init__(
- grid_size=size,
- max_steps=max_steps,
- # Set this to True for maximum speed
- see_through_walls=True,
- actions=MiniGridEnv.Actions,
- action_space=spaces.MultiDiscrete([
- len(MiniGridEnv.Actions),
- *HelperGrammar.grammar_action_space.nvec
- ]),
- add_npc_direction=True
- )
-
- print({
- "size": size,
- "diminished_reward": diminished_reward,
- "step_penalty": step_penalty,
- })
-
- def _gen_grid(self, width, height):
- # Create the grid
- self.grid = Grid(width, height, nb_obj_dims=4)
-
- # Randomly vary the room width and height
- width = self._rand_int(5, width+1)
- height = self._rand_int(5, height+1)
-
- self.wall_x = width-1
- self.wall_y = height-1
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, width, height)
-
- # add lava
- self.grid.vert_wall(width//2, 1, height - 2, Lava)
-
- # door top
- door_color_top = self._rand_elem(COLOR_NAMES)
- self.door_pos_top = (width-1, 1)
- self.door_top = Door(door_color_top, is_locked=True)
- self.grid.set(*self.door_pos_top, self.door_top)
-
- # switch top
- self.switch_pos_top = (0, 1)
- self.switch_top = Switch(door_color_top, lockable_object=self.door_top, locker_switch=True)
- self.grid.set(*self.switch_pos_top, self.switch_top)
-
- # door bottom
- door_color_bottom = self._rand_elem(COLOR_NAMES)
- self.door_pos_bottom = (width-1, height-2)
- self.door_bottom = Door(door_color_bottom, is_locked=True)
- self.grid.set(*self.door_pos_bottom, self.door_bottom)
-
- # switch bottom
- self.switch_pos_bottom = (0, height-2)
- self.switch_bottom = Switch(door_color_bottom, lockable_object=self.door_bottom, locker_switch=True)
- self.grid.set(*self.switch_pos_bottom, self.switch_bottom)
-
- # save to variables
- self.switches = [self.switch_top, self.switch_bottom]
- self.switches_pos = [self.switch_pos_top, self.switch_pos_bottom]
- self.door = [self.door_top, self.door_bottom]
- self.door_pos = [self.door_pos_top, self.door_pos_bottom]
-
-        # Set a randomly coloured Peer NPC
- color = self._rand_elem(COLOR_NAMES)
- self.peer = Peer(color, "Jill", self)
-
- # Place it on the middle right side of the room
- peer_pos = np.array((self._rand_int(width//2+1, width - 1), self._rand_int(1, height - 1)))
-
- self.grid.set(*peer_pos, self.peer)
- self.peer.init_pos = peer_pos
- self.peer.cur_pos = peer_pos
-
- # Randomize the agent's start position and orientation
- self.place_agent(size=(width//2, height))
-
- # Generate the mission string
-        self.mission = 'help the peer reach the door it has chosen'
-
- # Dummy beginning string
- self.beginning_string = "This is what you hear. \n"
- self.utterance = self.beginning_string
-
- # utterance appended at the end of each step
- self.utterance_history = ""
-
- # used for rendering
- self.conversation = self.utterance
- self.outcome_info = None
-
- def step(self, action):
- p_action = action[0]
- utterance_action = action[1:]
-
- obs, reward, done, info = super().step(p_action)
- self.peer.step()
-
- if np.isnan(p_action):
- pass
-
- if p_action == self.actions.done:
- done = True
-
- elif all(self.agent_pos == self.door_pos_top):
- done = True
-
- elif all(self.agent_pos == self.door_pos_bottom):
- done = True
-
- elif all([self.switch_top.is_on, self.switch_bottom.is_on]):
- # if both switches are on no reward is given and episode ends
- done = True
-
- elif all(self.peer.cur_pos == self.peer.selected_door_pos):
- reward = self._reward()
- done = True
-
- # discount
- if self.step_penalty:
- reward = reward - 0.01
-
- if self.hidden_npc:
- # all npc are hidden
- assert np.argwhere(obs['image'][:,:,0] == OBJECT_TO_IDX['npc']).size == 0
- assert "{}:".format(self.peer.name) not in self.utterance
-
- # fill observation with text
- self.append_existing_utterance_to_history()
- obs = self.add_utterance_to_observation(obs)
- self.reset_utterance()
-
- if done:
- if reward > 0:
- self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1))
- else:
- self.outcome_info = "FAILURE: agent got {} reward \n".format(reward)
-
- return obs, reward, done, info
-
- def _reward(self):
- if self.diminished_reward:
- return super()._reward()
- else:
- return 1.0
-
- def render(self, *args, **kwargs):
- obs = super().render(*args, **kwargs)
- self.window.clear_text() # erase previous text
-
- # self.window.set_caption(self.conversation, [self.peer.name])
- # self.window.ax.set_title("correct door: {}".format(self.true_guide.target_color), loc="left", fontsize=10)
- if self.outcome_info:
- color = None
- if "SUCCESS" in self.outcome_info:
- color = "lime"
- elif "FAILURE" in self.outcome_info:
- color = "red"
- self.window.add_text(*(0.01, 0.85, self.outcome_info),
- **{'fontsize':15, 'color':color, 'weight':"bold"})
-
- self.window.show_img(obs) # re-draw image to add changes to window
- return obs
-
-
-class Helper8x8Env(HelperEnv):
- def __init__(self, **kwargs):
- super().__init__(size=8, max_steps=20, **kwargs)
-
-
-class Helper6x6Env(HelperEnv):
- def __init__(self):
- super().__init__(size=6, max_steps=20)
-
-
-
-register(
- id='MiniGrid-Helper-5x5-v0',
- entry_point='gym_minigrid.envs:HelperEnv'
-)
-
-register(
- id='MiniGrid-Helper-6x6-v0',
- entry_point='gym_minigrid.envs:Helper6x6Env'
-)
-
-register(
- id='MiniGrid-Helper-8x8-v0',
- entry_point='gym_minigrid.envs:Helper8x8Env'
-)
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py
deleted file mode 100644
index 3b18bd0839fba3897d92660a7eb8bd79d493d2f1..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import gradio as gr
-import cv2
-import numpy as np
-import random
-
-
-# Convert decimal color to hexadecimal color
-def RGB_to_Hex(rgb):
- color = "#"
- for i in rgb:
- num = int(i)
- color += str(hex(num))[-2:].replace("x", "0").upper()
- return color
-
-
-# Randomly generate light or dark colors
-def random_color(is_light=True):
- return (
- random.randint(0, 127) + int(is_light) * 128,
- random.randint(0, 127) + int(is_light) * 128,
- random.randint(0, 127) + int(is_light) * 128,
- )
-
-
-def switch_color(color_style):
-    # Treat anything other than "dark" as light so is_light is always defined
-    is_light = color_style != "dark"
- back_color_ = random_color(is_light) # Randomly generate colors
- back_color = RGB_to_Hex(back_color_) # Convert to hexadecimal
-
- # Draw color pictures.
- w, h = 50, 50
- img = np.zeros((h, w, 3), np.uint8)
- cv2.rectangle(img, (0, 0), (w, h), back_color_, thickness=-1)
-
- return back_color, back_color, img
-
-
-inputs = [gr.Radio(["light", "dark"], value="light")]
-
-outputs = [
- gr.ColorPicker(label="color"),
- gr.Textbox(label="hexadecimal color"),
- gr.Image(type="numpy", label="color picture"),
-]
-
-title = "Color Generator"
-description = (
- "Click the Submit button, and a dark or light color will be randomly generated."
-)
-
-demo = gr.Interface(
- fn=switch_color,
- inputs=inputs,
- outputs=outputs,
- title=title,
- description=description,
-)
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/frncscp/bullerengue/musika/musika_encode.py b/spaces/frncscp/bullerengue/musika/musika_encode.py
deleted file mode 100644
index 566d2b169219488258798a117c67325b518fcaf1..0000000000000000000000000000000000000000
--- a/spaces/frncscp/bullerengue/musika/musika_encode.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import os
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
-
-from parse.parse_encode import parse_args
-from models import Models_functions
-from utils_encode import UtilsEncode_functions
-
-if __name__ == "__main__":
-
- # parse args
- args = parse_args()
-
- # initialize networks
- M = Models_functions(args)
- M.download_networks()
- models_ls = M.get_networks()
-
- # encode samples
- U = UtilsEncode_functions(args)
- if args.whole:
- U.compress_whole_files(models_ls)
- else:
- U.compress_files(models_ls)
diff --git a/spaces/frncscp/bullerengue/musika/parse/parse_decode.py b/spaces/frncscp/bullerengue/musika/parse/parse_decode.py
deleted file mode 100644
index 8472d77c4a13d116417fb3a58469cf4a41fa8038..0000000000000000000000000000000000000000
--- a/spaces/frncscp/bullerengue/musika/parse/parse_decode.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import argparse
-from typing import Any
-import tensorflow as tf
-
-
-class EasyDict(dict):
- def __getattr__(self, name: str) -> Any:
- try:
- return self[name]
- except KeyError:
- raise AttributeError(name)
-
- def __setattr__(self, name: str, value: Any) -> None:
- self[name] = value
-
- def __delattr__(self, name: str) -> None:
- del self[name]
-
-
-def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ("yes", "true", "t", "y", "1"):
- return True
- elif v.lower() in ("no", "false", "f", "n", "0"):
- return False
- else:
- raise argparse.ArgumentTypeError("Boolean value expected.")
-
-
-def params_args(args):
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--hop",
- type=int,
- default=256,
- help="Hop size (window size = 4*hop)",
- )
- parser.add_argument(
- "--mel_bins",
- type=int,
- default=256,
- help="Mel bins in mel-spectrograms",
- )
- parser.add_argument(
- "--sr",
- type=int,
- default=44100,
- help="Sampling Rate",
- )
- parser.add_argument(
- "--small",
- type=str2bool,
- default=False,
- help="If True, use model with shorter available context, useful for small datasets",
- )
- parser.add_argument(
- "--latdepth",
- type=int,
- default=64,
- help="Depth of generated latent vectors",
- )
- parser.add_argument(
- "--coorddepth",
- type=int,
- default=64,
- help="Dimension of latent coordinate and style random vectors",
- )
- parser.add_argument(
- "--max_lat_len",
- type=int,
- default=512,
- help="Length of latent sequences: a random on-the-fly crop will be used for training",
- )
- parser.add_argument(
- "--base_channels",
- type=int,
- default=128,
- help="Base channels for generator and discriminator architectures",
- )
- parser.add_argument(
- "--shape",
- type=int,
- default=128,
- help="Length of spectrograms time axis",
- )
- parser.add_argument(
- "--window",
- type=int,
- default=64,
- help="Generator spectrogram window (must divide shape)",
- )
- parser.add_argument(
- "--mu_rescale",
- type=float,
- default=-25.0,
- help="Spectrogram mu used to normalize",
- )
- parser.add_argument(
- "--sigma_rescale",
- type=float,
- default=75.0,
- help="Spectrogram sigma used to normalize",
- )
- parser.add_argument(
- "--files_path",
- type=str,
- default="audio_samples/",
- help="Path of compressed latent samples to decode",
- )
- parser.add_argument(
- "--save_path",
- type=str,
- default="decoded_samples/",
- help="Path where decoded audio files will be saved",
- )
- parser.add_argument(
- "--dec_path",
- type=str,
- default="checkpoints/ae",
- help="Path of pretrained decoders weights",
- )
- parser.add_argument(
- "--load_path",
- type=str,
- default="None",
- help="If not None, load models weights from this path",
- )
- parser.add_argument(
- "--base_path",
- type=str,
- default="checkpoints",
- help="Path where pretrained models are downloaded",
- )
- parser.add_argument(
- "--testing",
- type=str2bool,
- default=True,
- help="True if optimizers weight do not need to be loaded",
- )
- parser.add_argument(
- "--cpu",
- type=str2bool,
- default=False,
- help="True if you wish to use cpu",
- )
- parser.add_argument(
- "--mixed_precision",
- type=str2bool,
- default=True,
- help="True if your GPU supports mixed precision",
- )
-
- tmp_args = parser.parse_args()
-
- args.hop = tmp_args.hop
- args.mel_bins = tmp_args.mel_bins
- args.sr = tmp_args.sr
- args.small = tmp_args.small
- args.latdepth = tmp_args.latdepth
- args.coorddepth = tmp_args.coorddepth
- args.max_lat_len = tmp_args.max_lat_len
- args.base_channels = tmp_args.base_channels
- args.shape = tmp_args.shape
- args.window = tmp_args.window
- args.mu_rescale = tmp_args.mu_rescale
- args.sigma_rescale = tmp_args.sigma_rescale
- args.save_path = tmp_args.save_path
- args.files_path = tmp_args.files_path
- args.dec_path = tmp_args.dec_path
- args.load_path = tmp_args.load_path
- args.base_path = tmp_args.base_path
- args.testing = tmp_args.testing
- args.cpu = tmp_args.cpu
- args.mixed_precision = tmp_args.mixed_precision
-
- if args.small:
- args.latlen = 128
- else:
- args.latlen = 256
- args.coordlen = (args.latlen // 2) * 3
-
- print()
-
- args.datatype = tf.float32
- gpuls = tf.config.list_physical_devices("GPU")
- if len(gpuls) == 0 or args.cpu:
- args.cpu = True
- args.mixed_precision = False
- tf.config.set_visible_devices([], "GPU")
- print()
- print("Using CPU...")
- print()
- if args.mixed_precision:
- args.datatype = tf.float16
- print()
- print("Using GPU with mixed precision enabled...")
- print()
- if not args.mixed_precision and not args.cpu:
- print()
- print("Using GPU without mixed precision...")
- print()
-
- return args
-
-
-def parse_args():
- args = EasyDict()
- return params_args(args)
diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py
deleted file mode 100644
index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000
--- a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import os
-import json
-import time
-import subprocess
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://phind.com'
-model = ['gpt-4']
-supports_stream = True
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
- path = os.path.dirname(os.path.realpath(__file__))
- config = json.dumps({
- 'model': model,
- 'messages': messages}, separators=(',', ':'))
-
- cmd = ['python', f'{path}/helpers/phind.py', config]
-
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
- for line in iter(p.stdout.readline, b''):
- if b'Just a moment...' in line:
- os.system('clear' if os.name == 'posix' else 'cls')
-                yield 'Cloudflare error, please try again...'
- os._exit(0)
-
- else:
- if b'ping - 2023-' in line:
- continue
-
- yield line.decode('cp1251') #[:-1]
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
deleted file mode 100644
index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-from .decode_head import BaseDecodeHead
-
-
-class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta):
- """Base class for cascade decode head used in
-    :class:`CascadeEncoderDecoder`."""
-
- def __init__(self, *args, **kwargs):
- super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs)
-
- @abstractmethod
- def forward(self, inputs, prev_output):
- """Placeholder of forward function."""
- pass
-
- def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg,
- train_cfg):
- """Forward function for training.
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
- train_cfg (dict): The training config.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- seg_logits = self.forward(inputs, prev_output)
- losses = self.losses(seg_logits, gt_semantic_seg)
-
- return losses
-
- def forward_test(self, inputs, prev_output, img_metas, test_cfg):
- """Forward function for testing.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- test_cfg (dict): The testing config.
-
- Returns:
- Tensor: Output segmentation map.
- """
- return self.forward(inputs, prev_output)
diff --git a/spaces/gordonchan/h2oo/enums.py b/spaces/gordonchan/h2oo/enums.py
deleted file mode 100644
index 2041b8c24f3bbb7bf0e368ebbdbc482adbb4da80..0000000000000000000000000000000000000000
--- a/spaces/gordonchan/h2oo/enums.py
+++ /dev/null
@@ -1,120 +0,0 @@
-from enum import Enum
-
-
-class PromptType(Enum):
- custom = -1
- plain = 0
- instruct = 1
- quality = 2
- human_bot = 3
- dai_faq = 4
- summarize = 5
- simple_instruct = 6
- instruct_vicuna = 7
- instruct_with_end = 8
- human_bot_orig = 9
- prompt_answer = 10
- open_assistant = 11
- wizard_lm = 12
- wizard_mega = 13
- instruct_vicuna2 = 14
- instruct_vicuna3 = 15
- wizard2 = 16
- wizard3 = 17
- instruct_simple = 18
- wizard_vicuna = 19
- openai = 20
- openai_chat = 21
- gptj = 22
- prompt_answer_openllama = 23
- vicuna11 = 24
- mptinstruct = 25
- mptchat = 26
- falcon = 27
- guanaco = 28
- llama2 = 29
-
-
-class DocumentSubset(Enum):
- Relevant = 0
- RelSources = 1
- TopKSources = 2
-
-
-non_query_commands = [
- DocumentSubset.RelSources.name,
- DocumentSubset.TopKSources.name
-]
-
-
-class DocumentChoice(Enum):
- ALL = 'All'
-
-
-class LangChainMode(Enum):
- """LangChain mode"""
-
- DISABLED = "Disabled"
- LLM = "LLM"
- ALL = "All"
- WIKI = "wiki"
- WIKI_FULL = "wiki_full"
- USER_DATA = "UserData"
- MY_DATA = "MyData"
- GITHUB_H2OGPT = "github h2oGPT"
- H2O_DAI_DOCS = "DriverlessAI docs"
-
-
-# modes should not be removed from visible list or added by name
-langchain_modes_intrinsic = [LangChainMode.DISABLED.value,
- LangChainMode.LLM.value,
- LangChainMode.MY_DATA.value]
-
-
-class LangChainAction(Enum):
- """LangChain action"""
-
- QUERY = "Query"
- # WIP:
- # SUMMARIZE_MAP = "Summarize_map_reduce"
- SUMMARIZE_MAP = "Summarize"
- SUMMARIZE_ALL = "Summarize_all"
- SUMMARIZE_REFINE = "Summarize_refine"
-
-
-class LangChainAgent(Enum):
- """LangChain agents"""
-
- SEARCH = "Search"
- # CSV = "csv" # WIP
-
-
-no_server_str = no_lora_str = no_model_str = '[None/Remove]'
-
-# from site-packages/langchain/llms/openai.py
-# but needed since ChatOpenAI doesn't have this information
-model_token_mapping = {
- "gpt-4": 8192,
- "gpt-4-0314": 8192,
- "gpt-4-32k": 32768,
- "gpt-4-32k-0314": 32768,
- "gpt-3.5-turbo": 4096,
- "gpt-3.5-turbo-16k": 16 * 1024,
- "gpt-3.5-turbo-0301": 4096,
- "text-ada-001": 2049,
- "ada": 2049,
- "text-babbage-001": 2040,
- "babbage": 2049,
- "text-curie-001": 2049,
- "curie": 2049,
- "davinci": 2049,
- "text-davinci-003": 4097,
- "text-davinci-002": 4097,
- "code-davinci-002": 8001,
- "code-davinci-001": 8001,
- "code-cushman-002": 2048,
- "code-cushman-001": 2048,
-}
-
-source_prefix = "Sources [Score | Link]:"
-source_postfix = "End Sources"
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md b/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md
deleted file mode 100644
index df317c733d856171642e2b44d07989c0abb8ce14..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Internet Archive is a digital library with a large collection of free movies, books, audio files, images, and more. You can also find movie audio tracks on this website or download movies in MP3 format. To find a movie's audio track, type the movie title in the search field, open the result, and click MP3 under DOWNLOAD OPTIONS.
-
Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies
The Harry Potter movie series is an adaptation of the fantasy novels of the same name written by J.K. Rowling, a British author. The Harry Potter books and movies revolve around the main character Harry Potter (played by the actor Daniel Radcliffe), a young orphaned boy who finds out that he is a wizard on his eleventh birthday. He is then admitted to the Hogwarts School of Witchcraft and Wizardry, where he unveils the truth about his parents' death. Along with his friends, he sets off on an adventure full of mysteries, suspense, friendships, games, family, and so much more. Along the way, they plan to ultimately defeat Lord Voldemort, who is portrayed as the most powerful and seemingly undefeatable dark wizard in ages.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md b/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md
deleted file mode 100644
index f518341cfbd457b334932657577b0307a9758fac..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
Device Manager is a tool found in all Windows computers to help users download and install updated drivers. Below is how to use it to download and install the update for Windows 11/10 Fresco Logic USB VGA display driver.
Above we elucidated all the manual methods to download the Fresco Logic USB VGA display driver for Windows 10/11. These manual ways are complicated and a bit time-consuming. Hence, to save you precious time and effort, we recommend downloading and installing the updated drivers via a program like Bit Driver Updater.
-
After you are done downloading and installing Bit Driver Updater, wait for two to three seconds to get a list of problematic drivers. Once you know which drivers are outdated, you may Update All these outdated drivers with a single click on the designated button to do it.
-
The above was all about downloading and installing the updated Fresco Logic USB display driver for Windows 11/10. Now, you might be interested in knowing what to do if your USB display driver is not working for any reason. Well, as a bonus for our readers, we talk about the same in the following section of this article.
-
The driver might not be installed correctly if you tried to do the installation manually. Hence, you may try reinstalling the Fresco Logic USB display driver if it is not working. Below is the process to do it.
-
Outdated drivers are among the most prominent causes of almost all problems like the Fresco Logic USB VGA display driver not working. Hence, you may follow the above guide to download the driver update and install it.
-
-
This article communicated the best possible methods to download and install the updated Fresco Logic USB display driver for Windows 11/10. Moreover, we also discussed how to fix the issues if the driver is not working.
-
When I rebooted my mac, it hung up on the boot page with the apple logo. The boot process hangs even though the loading bar is completely filled. I can't start my computer now so I'm trying to delete the driver via recovery mode. I think it is one of the kext files in Library/Extensions, but there are so many and none of them say frescologic.
-
My macbook pro is a 2015 model with Mac OS Mojave installed. Unfortunately I don't have a filevault backup so uninstalling the driver is my only hope of recovering my work. Is there an easy way to identify this kernel extension and am I even looking in the right place?
-
I've been looking everywhere for a driver or chipset info for the unbranded "Mini HD USB 3.0 HDMI Adapter" for years. I finally dug though enough duck duck go results to find a page that claims it uses the Fresco logic USB display driver, which brought me here.
-
This driver is tested on Ubuntu 14 LTS as well as some Android platforms with kernel version 3.10.x. This driver source might not compile on newer kernels (eg. 4.0 or above) because of the fast-moving API changes in the mainstream kernel. You might need to adapt it for your own use.
-
Thanks!! It worked)) Talking with you helped me remember another way) Device Manager > error Fresco > Update driver > Browse my computer > Let me pick from a list of device drivers > Have Disk... > my Fresco driver))
-
(c) Download and decompress the Fresco Logic registry fix. Insert the fix into your registry by double-clicking it on Windows. This registry fix only inserts a key used by the FL driver to disable the U1/U2 power states, I think, but you are using this at your own risk.
-
I started trying out various Fresco Logic drivers I could get my hands on. I have tried, without success, versions: 3.5.93.0, 3.5.88.0, 3.5.73.0, 3.5.46.0, 3.5.30.0, 3.5.24, 3.5.2.0. Version 3.5.24 seemed to be working, but when I pushed the drive by transferring lots of data simultaneously, the device disconnected.
-
I have contacted AsRock Support (yet again) and these guys have been great in sending me driver v3.0.100.58 which actually works!! To save the day for anyone with the same problem I have made a backup of this working driver!
-
I bet you tried hard and I know how it feels; I got similar treatment at the time. Fresco Logic (the controller manufacturer of my computer) just ignored my very specific requests for support; probably they knew already the problem existed with their drivers; not sure.
-
Back to your problem. Given that you tried the WD on other laptops and it works without problems that does point the finger to the Asus laptop. However, do double-check that the different laptops did have the same OS (e.g. win 10) with your Asus, so you can have a fair field to compare, at least as far as the OS goes. Beyond that, you could try to bypass the Asus driver and install the Intel USB drivers from Intel. Maybe you can work with this: -USB-3-0-eXtensible-Host-Controller-Driver, otherwise try to locate which drivers are compatible with that of your laptop. See if you can replace the Asus-supplied driver and helps solve your problem (at your own risk of course :) )
-
Thank you for sharing your extra efforts on this problem. I had the exact same problem with a WD My Passport 1TB USB 3.0 Portable hard drive and my Fresco FL1000 USB3.0 driver on Windows. It is working perfectly now!
-
You saved my day, or rather my evening. Have the exact same wd passport as you and had to play detective for about an hour to conclude it must be a problem with the usb 3.0, Fresco, just as yours. And your driver WORKS. Excellent ,and thank you.
-
Workaround (a) I think implied that you will probably use a well tested, quality USB 3.0 controller coupled with good drivers, so you are indeed changing hardware configuration and there are plenty of those that we know that the drive works. As for the Y-cable, I think it is a good solution for laptops that are known to have issues with power on their usb ports, but for desktops with on-board usb3.0 ports, that should not be an issue, although for some people that can be the case. Not for me.
-
The installed drivers are the official ones provided by Asus: _Fresco_Win8_64_Z35730.zip
As you can see, the drivers are provided with a patch, but I'm not sure it is really installed; the batch file doesn't seem to return anything.
Any help will be appreciated :)
PS: I think that guy pointed out something related to my issue : -US/a3748df9-18bf-48b7-a834-a99c9de84e3b/2-problems-after-updating-to-windows-81?forum=w8itprohardware&prof=required
But I'm not sure what I can do with this.
-
Thank you for your reply,
Here's Asus support answer :
Dear sir,
Thank you for your e-mail. In this case I would advise you to make use of the "Go Back" function in Windows 10. Since your system was delivered with Windows 7, we do not yet provide drivers for Windows 10 for this system. Windows Update will probably provide drivers for this device in the future. We would advise you to go back to your previous Windows OS until your system is compatible with Windows 10.
A/V is uninstalled and SFC said it fixed some things but issue is still there.
Kind regards
-
The driver roll-back that I mentioned in my previous post was the solution to the problem. Crazy that I had to roll back to a driver that was already outdated when my computer was released but that is what fixed the problem. Anyone having similar problems should check drivers and consider rolling back to the earliest available driver they can find. Test it then update one version at a time until it stops working. Go back to the last version and leave it at that.
-
The fix for me has been to update the Fresco Logic xHCI (USB3) Controller driver to version 3.5.30.0 - but this is not the newest version, and as kyroguy pointed out, newer versions of the driver caused the same problem. So it seems there is a narrow range of Fresco Logic driver versions that properly support this drive - older versions and newer versions of the driver do not work.
-
It sounds like you may be experiencing a device-specific issue associated with power management. Power management has been enabled more aggressively in these recent drivers, which would explain why you are seeing this issue with our most up-to-date software.
-
To update your USB drivers in Windows 10, go to Settings > Update & Security > Windows Update, then click Check for Updates. Windows will search for available updates, including driver updates. Alternatively, navigate to Device Manager and click Universal Serial Bus Controllers. Right-click the device you're having an issue with and select Update Driver.
-
To reinstall a USB driver, navigate to Device Manager, right-click the name of the device you're having an issue with, and select Uninstall. Restart your PC, and Windows will automatically reinstall the driver.
-
To uninstall USB drivers, navigate to Device Manager, click the View menu, and enable Show Hidden Devices. Find the type of device you're dealing with, then expand the menu, right-click your device, and select Uninstall. In the confirmation dialog, click Delete the driver software for this device > OK.
-
To uninstall the Fresco Logic USB display driver, you can go to the Add/Remove Program feature in Windows Control Panel. In Windows Vista/7/8/10, click the Add or Remove Programs tab and select Uninstall a program. On Windows XP, click the Add or Remove Programs tab and select Change/Remove. The removal process will begin, and a progress bar will display how long it will take. This driver runs on Windows OS releases and PC manufacturers install it on their systems.
-
First, uninstall the Fresco Logic VGA Display Driver from your computer by following the instructions provided by the computer manufacturer. In most cases, the driver is located in C: Program Files (x86) and can be removed from the Control Panel or by using the Command Prompt. Alternatively, you can manually uninstall the Fresco Logic USB display driver by launching the command prompt in your operating system.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md b/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md
deleted file mode 100644
index 858dc119afe09bcf8a18716a7d153e335d0e8f7c..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-kuyhAa.Me - All in One Runtimes 2.4.8 Latest is a collection of supporting software or programs for when Windows is trying to ...
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md
deleted file mode 100644
index b22be5948d4b63083c40322a7d7bdbf261e83f1b..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
In June 2017, the Department of Health settled an enforcement action brought by the New York State Attorney General’s Office, agreeing to assess a total of nearly $1.7 million in penalties and fees against Holtec. The Department was also directed to send Holtec’s payments to the New York State Comptroller for deposit in a Nuclear decommissioning fund and account. Holtec agreed to provide quarterly reports on activities at Indian Point for this year and the following three years.
-
The Indian Point NRC license was issued by the US Nuclear Regulatory Commission (NRC) in 1994 and renewed in 2010 and 2014. New York State intervened on Indian Point on June 28, 2013, because the NRC had failed to consider the risks of a post-decommissioning nuclear waste repository at Indian Point and the surrounding area. State officials prepared a plan for dealing with the waste from the site, which included a formal letter of objection to the license transfer. Following the 2015 license transfer, NY officials opposed the transfer on safety grounds. The New York State Court of Appeals intervened, claiming that the NRC improperly delegated its responsibility to New York State, and the court questioned the government's motives for the license transfer. In April 2017, the NRC denied NY's request to hold a contested case hearing. The NRC has ignored significant safety, financial, and engineering issues. NY and the court are fighting to turn back the license transfer. Holtec is undeterred. Last week, in an action of which the court was not informed, the New York State Department of Health suspended the license for daily patient access to Indian Point by workers and patients to minimize the risk of radiation exposure and to reduce the spread of the Covid-19 pandemic. This disruption was evidently costly because the suspension is still in place and worker access is limited primarily to freight delivery and collection services.
-
-axioo fw01 driver
-About the application Show your bonus card with every order in our establishment and get points.
-For the accumulated points you can get nice gifts!
-You can receive your first gift on your next visit.
-About us The network of Japanese restaurants "Eurasia" invites you to visit restaurants and enjoy Japanese cuisine in the city of St. Petersburg.
-Our restaurants are created 8a78ff9644
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md b/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md
deleted file mode 100644
index d1b0473d00d66fd0a3f67f6b0e5e0bebf8565653..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-
DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version]
-
DAEMON Tools Ultra is a powerful and versatile software that allows you to create, mount, and manage virtual drives and disc images. With DAEMON Tools Ultra, you can create bootable USB sticks, virtual hard disks, and RAM disks, as well as emulate various types of optical drives and discs.
-
If you want to enjoy the full features of DAEMON Tools Ultra, you need to activate it with a valid license key. However, some people may try to use a cracked version of the software, which can expose them to various risks and problems. In this article, we will explain why you should avoid using DAEMON Tools Ultra 5.7.0 crack with key, and how you can get a legitimate copy of the software.
-
Why You Should Avoid Using DAEMON Tools Ultra 5.7.0 Crack With Key
-
Using a cracked version of DAEMON Tools Ultra may seem like a tempting option, especially if you don't want to pay for the software. However, there are many reasons why you should avoid doing so, such as:
-
-
It is illegal. Cracking software is a form of piracy, which violates the intellectual property rights of the developers and distributors of the software. By using a cracked version of DAEMON Tools Ultra, you are breaking the law and risking legal consequences.
-
It is unsafe. Cracked software often comes from untrusted sources, such as torrent sites or shady websites. These sources may contain malware, viruses, spyware, or other harmful programs that can infect your computer and compromise your security and privacy. You may also expose your personal data and sensitive information to hackers and cybercriminals.
-
It is unreliable. Cracked software may not work properly or at all, as it may have errors, bugs, or compatibility issues. You may also miss out on important updates and patches that fix bugs and improve performance and stability. Moreover, you may not be able to access customer support or technical assistance if you encounter any problems with the software.
-
It is unethical. Cracking software is a form of stealing, which harms the developers and distributors of the software who invest time, money, and effort into creating and maintaining it. By using a cracked version of DAEMON Tools Ultra, you are depriving them of their rightful income and discouraging them from developing more quality products in the future.
-
-
How You Can Get a Legitimate Copy of DAEMON Tools Ultra
-
If you want to use DAEMON Tools Ultra without any risks or problems, you should get a legitimate copy of the software from the official website: https://www.daemon-tools.cc/products/dtultra.
-
On the website, you can choose from different plans and pricing options that suit your needs and budget. You can also download a free trial version of the software that lets you test its features for 14 days.
-
-
By getting a legitimate copy of DAEMON Tools Ultra, you can enjoy the following benefits:
-
-
It is legal. You will have a valid license key that proves your ownership and authorization to use the software. You will not violate any laws or regulations by using the software.
-
It is safe. You will download the software from a trusted source that guarantees its quality and security. You will not expose your computer or data to any malware or threats by using the software.
-
It is reliable. You will get the latest version of the software that works smoothly and efficiently. You will also receive regular updates and patches that fix bugs and improve performance and stability. Moreover, you will be able to access customer support and technical assistance if you encounter any problems with the software.
-
It is ethical. You will support the developers and distributors of the software who deserve to be rewarded for their work and innovation. You will also encourage them to continue creating and improving more quality products in the future.
-
-
Conclusion
-
DAEMON Tools Ultra is a great piece of software that allows you to create, mount, and manage virtual drives and disc images, but only a legitimate copy from the official website lets you use it legally, safely, and reliably.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md
deleted file mode 100644
index f65029fce5fcb667de737a74752fffc8f463b00e..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
If you still have trouble after using these steps, you can always try different software that is more compatible with your PC. For example, you can try a free tool such as Serial Number Generator from http://assetgeneratorz.com/. Asset Generator is a powerful tool used to generate serial numbers, product keys, and license keys for anti-virus software, instant messengers, video editors, music editors, and so on. It supports all versions of Windows OS, including Windows 7, Windows 8, and Windows 10.
Autodesk 3ds Max 2014 crack serial key provides a comprehensive, integrated 3D modeling, animation, and rendering solution for game developers, visual effects artists, and graphic designers. It has grown to be one of the top 3D animation software options, focused on providing a powerful modeling architecture for graphic designers. The new version includes a host of new features and several clever improvements to the existing tools.
-
Software registration is required to access Autodesk and Autodesk Application Network services. Registration gives you access to the benefits of the network, such as technical support, automatic updates, free education, online tutorials, and the ability to redeem a certificate for a free license to Autodesk products.
-
Autodesk uses different types of license keys depending on the version of Autodesk software that you purchase. If you are using a single license for Autodesk software as part of a product suite, you will be prompted to follow the instructions in that suite on how to obtain a license for the software as part of that suite.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md b/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md
deleted file mode 100644
index e1997118d5f16f533a3b8e10f3a5e3d9b41d1a0a..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
To set the font to Unicode, go to the Options dialog box, choose View, and click Modify. In the Modify Font dialog box: in the Select Font dialog box, set the type to Unicode and click OK. AutoCAD will create the font file with the special "%c" character set in the font name, but for this release that character set does not appear in the font file.
A primary benefit of Unicode is that it uses a universal coding system with a unique string for every character, ensuring consistency regardless of platform, software, or language. Before Unicode was supported by AutoCAD, control codes were used for some of the most common drawing symbols, including the diameter symbol in AutoCAD. Those control codes are still supported by AutoCAD and can be used in single- and multi-line text. The AutoCAD diameter symbol code is %%c. You may ask, why not %%d? Well, the equally popular degree symbol gets that honor. Visit the AutoCAD Help for a list of additional control codes.
-
The following is a listing of all of the Autodesk applications:
AutoCAD - Autodesk's foremost application in design. Use it to create drawings, 3D models, and animations.
AutoCAD LT - the lightweight version of the application; offers some of the same functionality as AutoCAD.
AutoCAD Classic - a legacy version that works under DOS and Windows 9x.
AutoCAD 2009 - this is the last version of AutoCAD to run on Windows XP.
AutoCAD 2010 - this is the first version of AutoCAD to run on Windows 7.
AutoCAD LT 2010 - the lightweight version of AutoCAD, the light engine of AutoCAD; offers some of the same functionality as AutoCAD.
AutoCAD 2011 - this is the first version of AutoCAD to run on Windows 8.
AutoCAD 2013 - this is the first version of AutoCAD to run on Windows 8.1.
AutoCAD 2014 - this is the first version of AutoCAD to run on Windows 10.
AutoCAD Light Table - this is the first version of AutoCAD to run on Mac OS X.
AutoCAD LT 2014 - the lightweight version of AutoCAD, the light engine of AutoCAD; offers some of the same functionality as AutoCAD.
AutoCAD Classic 2013 - the legacy version of AutoCAD; works under DOS and Windows 9x.
AutoCAD Classic 2010 - the legacy version of AutoCAD; works under Windows NT.
AutoCAD LT 2009 - the lightweight version of AutoCAD, the light engine of AutoCAD; offers some of the same functionality as AutoCAD.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md b/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md
deleted file mode 100644
index 0316b87a2dbfb343fca766ba1b8ebe472ba90964..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-[Fragmentary extract from the Tamil Nadu Co-operative Societies Act: 33-A. Tamil Nadu State Co-operative Societies Election Commission; 150. Powers of Registrar to issue certificate for recovery of sums due; and related provisions on resolutions to conduct the affairs of newly divided societies and on interests in the assets or profits of a registered society.]
-
-
-
diff --git a/spaces/ismot/1702t1/utils/logger.py b/spaces/ismot/1702t1/utils/logger.py
deleted file mode 100644
index 0f2e4dc66099c7e4784e37ab924e8594ffa03e27..0000000000000000000000000000000000000000
--- a/spaces/ismot/1702t1/utils/logger.py
+++ /dev/null
@@ -1,49 +0,0 @@
-"""
-@Date: 2021/07/17
-@description:
-"""
-import os
-import sys
-import logging
-import functools
-from termcolor import colored
-
-
-def build_logger(config):
- output_dir = config.LOGGER.DIR
- local_rank = config.LOCAL_RANK
- name = config.MODEL.NAME
- logger = get_logger(output_dir, local_rank, name)
- return logger
-
-
-@functools.lru_cache()
-def get_logger(output_dir=None, local_rank=None, name="PLTNet"):
- if output_dir and not os.path.exists(output_dir):
- os.makedirs(output_dir)
-
- # create logger
- logger = logging.getLogger(name)
- logger.setLevel(logging.DEBUG)
- logger.propagate = False
-
- # create formatter
- fmt = f'[%(asctime)s %(name)s][%(levelname)1.1s](%(filename)s %(lineno)d): %(message)s'
- color_fmt = colored(f'[%(asctime)s %(name)s][%(levelname)1.1s][{local_rank}]', 'green') + colored(
- f'(%(filename)s %(lineno)d)',
- 'yellow') + ': %(message)s'
- if local_rank in [0] or local_rank is None:
- console_handler = logging.StreamHandler(sys.stdout)
- console_handler.setLevel(logging.DEBUG)
- console_handler.setFormatter(
- logging.Formatter(fmt=color_fmt, datefmt='%Y-%m-%d %H:%M:%S'))
- logger.addHandler(console_handler)
-
- if output_dir is not None:
- # create file handlers
- file_handler = logging.FileHandler(os.path.join(output_dir, f'log_rank{local_rank}.log'), mode='a')
- file_handler.setLevel(logging.DEBUG)
- file_handler.setFormatter(logging.Formatter(fmt=fmt, datefmt='%Y-%m-%d %H:%M:%S'))
- logger.addHandler(file_handler)
-
- return logger
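-
-
-# Minimal usage sketch: how get_logger might be called directly; the directory and rank below
-# are made-up example values, not part of the project's configuration.
-if __name__ == '__main__':
- demo_logger = get_logger(output_dir='./demo_logs', local_rank=0, name='PLTNet')
- demo_logger.info('messages go to stdout and to ./demo_logs/log_rank0.log')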
diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py
deleted file mode 100644
index 7f56675be57834afb1e72051cc8d21fc611d9960..0000000000000000000000000000000000000000
--- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py
+++ /dev/null
@@ -1,61 +0,0 @@
-#!/usr/bin/python
-# coding: utf-8
-
-# Author: LE YUAN
-
-import csv
-import json
-import numpy as np
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-from matplotlib import rc
-
-
-with open('../../Data/database/Kcat_combination_0918.json', 'r') as infile :
- entries = json.load(infile)
-
-print(len(entries))
-
-Kcat = [float(entry['Value']) for entry in entries]
-
-plt.figure(figsize=(3,3))
-
-# To solve the 'Helvetica' font cannot be used in PDF file
-# https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font
-rc('font',**{'family':'serif','serif':['Helvetica']})
-plt.rcParams['pdf.fonttype'] = 42
-
-# plt.axes([0.12,0.12,0.83,0.83])
-
-plt.tick_params(direction='in')
-plt.tick_params(which='major',length=1.5)
-plt.tick_params(which='major',width=0.4)
-
-plt.hist(Kcat,5000,color='#2166ac')
-plt.xlabel('$k$$_\mathregular{cat}$ value', fontsize=7)
-plt.ylabel('Counts', fontsize=7)
-
-plt.rcParams['font.family'] = 'Helvetica'
-
-# plt.xlim(0,500000)
-# plt.xticks([0,10,100,1000,10000,100000])
-
-ax = plt.gca()
-ax.spines['bottom'].set_linewidth(0.5)
-ax.spines['left'].set_linewidth(0.5)
-ax.spines['top'].set_linewidth(0.5)
-ax.spines['right'].set_linewidth(0.5)
-
-plt.yscale('log')
-plt.xscale('log')
-
-plt.xticks(fontsize=6)
-plt.yticks(fontsize=6)
-
-plt.tight_layout()
-
-plt.savefig("../../Results/figures/SuppleFig1a.pdf", dpi=400)
-
-
-
diff --git a/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx b/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx
deleted file mode 100644
index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000
--- a/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useTheme } from 'next-themes'
-
-import { Button } from '@/components/ui/button'
-import { IconMoon, IconSun } from '@/components/ui/icons'
-
-export function ThemeToggle() {
- const { setTheme, theme } = useTheme()
- const [_, startTransition] = React.useTransition()
-
- return (
-   <Button
-     variant="ghost"
-     size="icon"
-     onClick={() => {
-       startTransition(() => {
-         setTheme(theme === 'light' ? 'dark' : 'light')
-       })
-     }}
-   >
-     {theme === 'dark' ? <IconMoon /> : <IconSun />}
-     <span className="sr-only">Toggle theme</span>
-   </Button>
- )
-}
diff --git a/spaces/jmesikto/whisper-webui/app-local.py b/spaces/jmesikto/whisper-webui/app-local.py
deleted file mode 100644
index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000
--- a/spaces/jmesikto/whisper-webui/app-local.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1))
\ No newline at end of file
diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py
deleted file mode 100644
index ef04ade68544d0477a7f6deb4e7d51e97f592910..0000000000000000000000000000000000000000
--- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset
-from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataLoader
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py
deleted file mode 100644
index c231599e37b3a5864a774387d717baf297957876..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from io import BytesIO
-from fontTools import cffLib
-from . import DefaultTable
-
-
-class table_C_F_F_(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.cff = cffLib.CFFFontSet()
- self._gaveGlyphOrder = False
-
- def decompile(self, data, otFont):
- self.cff.decompile(BytesIO(data), otFont, isCFF2=False)
- assert len(self.cff) == 1, "can't deal with multi-font CFF tables."
-
- def compile(self, otFont):
- f = BytesIO()
- self.cff.compile(f, otFont, isCFF2=False)
- return f.getvalue()
-
- def haveGlyphNames(self):
- if hasattr(self.cff[self.cff.fontNames[0]], "ROS"):
- return False # CID-keyed font
- else:
- return True
-
- def getGlyphOrder(self):
- if self._gaveGlyphOrder:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("illegal use of getGlyphOrder()")
- self._gaveGlyphOrder = True
- return self.cff[self.cff.fontNames[0]].getGlyphOrder()
-
- def setGlyphOrder(self, glyphOrder):
- pass
- # XXX
- # self.cff[self.cff.fontNames[0]].setGlyphOrder(glyphOrder)
-
- def toXML(self, writer, otFont):
- self.cff.toXML(writer)
-
- def fromXML(self, name, attrs, content, otFont):
- if not hasattr(self, "cff"):
- self.cff = cffLib.CFFFontSet()
- self.cff.fromXML(name, attrs, content, otFont)
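-
-
-# Hedged usage sketch: this table class is normally reached through fontTools' TTFont rather
-# than instantiated directly. "MyFont.otf" is a hypothetical CFF-flavoured OpenType file.
-# from fontTools.ttLib import TTFont
-# font = TTFont("MyFont.otf")
-# cff_table = font["CFF "]  # an instance of table_C_F_F_
-# print(cff_table.haveGlyphNames())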
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py
deleted file mode 100644
index 3433fdbc96cc68505a999f20919387b0d2acf31f..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""DEPRECATED - This module is kept here only as a backward compatibility shim
-for the old ufoLib.pointPen module, which was moved to fontTools.pens.pointPen.
-Please use the latter instead.
-"""
-from fontTools.pens.pointPen import *
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py
deleted file mode 100644
index d0b5dc91990ac75b7165d36a384282249ee644ab..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py
+++ /dev/null
@@ -1,81 +0,0 @@
-class Transaction:
- """Filesystem transaction write context
-
- Gathers files for deferred commit or discard, so that several write
- operations can be finalized semi-atomically. This works by having this
- instance as the ``.transaction`` attribute of the given filesystem
- """
-
- def __init__(self, fs):
- """
- Parameters
- ----------
- fs: FileSystem instance
- """
- self.fs = fs
- self.files = []
-
- def __enter__(self):
- self.start()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- """End transaction and commit, if exit is not due to exception"""
- # only commit if there was no exception
- self.complete(commit=exc_type is None)
- self.fs._intrans = False
- self.fs._transaction = None
-
- def start(self):
- """Start a transaction on this FileSystem"""
- self.files = [] # clean up after previous failed completions
- self.fs._intrans = True
-
- def complete(self, commit=True):
- """Finish transaction: commit or discard all deferred files"""
- for f in self.files:
- if commit:
- f.commit()
- else:
- f.discard()
- self.files = []
- self.fs._intrans = False
-
-
-class FileActor:
- def __init__(self):
- self.files = []
-
- def commit(self):
- for f in self.files:
- f.commit()
- self.files.clear()
-
- def discard(self):
- for f in self.files:
- f.discard()
- self.files.clear()
-
- def append(self, f):
- self.files.append(f)
-
-
-class DaskTransaction(Transaction):
- def __init__(self, fs):
- """
- Parameters
- ----------
- fs: FileSystem instance
- """
- import distributed
-
- super().__init__(fs)
- client = distributed.default_client()
- self.files = client.submit(FileActor, actor=True).result()
-
- def complete(self, commit=True):
- """Finish transaction: commit or discard all deferred files"""
- if commit:
- self.files.commit().result()
- else:
- self.files.discard().result()
- self.fs._intrans = False
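-
-
-# Hedged usage sketch: fsspec filesystems expose this class via the fs.transaction attribute,
-# so writes inside the block are committed together on a clean exit and discarded if an
-# exception escapes. The "memory" filesystem and path are example values.
-# import fsspec
-# fs = fsspec.filesystem("memory")
-# with fs.transaction:
-#     with fs.open("/staging/part-0.txt", "wb") as f:
-#         f.write(b"committed only when the transaction completes")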
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py
deleted file mode 100644
index ce6cc2bd4cb4194c37f72b2bc0ca0d4475c04a95..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py
+++ /dev/null
@@ -1,66 +0,0 @@
-"""Query transform."""
-
-from typing import Optional
-
-from gpt_index.indices.query.schema import QueryBundle
-from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor
-from gpt_index.prompts.base import Prompt
-from gpt_index.prompts.default_prompts import DEFAULT_HYDE_PROMPT
-
-
-class BaseQueryTransform:
- """Base class for query transform.
-
- A query transform augments a raw query string with associated transformations
- to improve index querying.
- """
-
- def __call__(self, query_str: str) -> QueryBundle:
- """Run query processor."""
- return QueryBundle(query_str=query_str, custom_embedding_strs=[query_str])
-
-
-class HyDEQueryTransform(BaseQueryTransform):
- """Hypothetical Document Embeddings (HyDE) query transform.
-
- It uses an LLM to generate hypothetical answer(s) to a given query,
- and use the resulting documents as embedding strings.
-
- As described in `[Precise Zero-Shot Dense Retrieval without Relevance Labels]
- (https://arxiv.org/abs/2212.10496)`
- """
-
- def __init__(
- self,
- llm_predictor: Optional[LLMPredictor] = None,
- hyde_prompt: Optional[Prompt] = None,
- include_original: bool = True,
- ) -> None:
- """Initialize HyDEQueryTransform.
-
- Args:
- llm_predictor (Optional[LLMPredictor]): LLM for generating
- hypothetical documents
- hyde_prompt (Optional[Prompt]): Custom prompt for HyDE
- include_original (bool): Whether to include original query
- string as one of the embedding strings
- """
- super().__init__()
-
- self._llm_predictor = llm_predictor or LLMPredictor()
- self._hyde_prompt = hyde_prompt or DEFAULT_HYDE_PROMPT
- self._include_original = include_original
-
- def __call__(self, query_str: str) -> QueryBundle:
- """Run query transform."""
- # TODO: support generating multiple hypothetical docs
- hypothetical_doc, _ = self._llm_predictor.predict(
- self._hyde_prompt, context_str=query_str
- )
- embedding_strs = [hypothetical_doc]
- if self._include_original:
- embedding_strs.append(query_str)
- return QueryBundle(
- query_str=query_str,
- custom_embedding_strs=embedding_strs,
- )
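-
-
-# Hedged usage sketch, based only on the classes above: HyDEQueryTransform asks the LLM
-# predictor for a hypothetical answer and returns a QueryBundle whose embedding strings
-# contain that answer (plus the raw query when include_original=True). The query string is
-# a made-up example.
-# hyde = HyDEQueryTransform(include_original=True)
-# bundle = hyde("What did the author do after college?")
-# print(bundle.query_str, bundle.custom_embedding_strs)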
diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py b/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py
deleted file mode 100644
index 85dd7a38f30639b377a504c2c0295e2b8955cea9..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright 2022 The T5X Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""This empty file is needed for loading the gin files in this directory."""
diff --git a/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py b/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py
deleted file mode 100644
index 749149fd091f30fdae77d20c57cf6197d83874c9..0000000000000000000000000000000000000000
--- a/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py
+++ /dev/null
@@ -1,45 +0,0 @@
-from json import loads
-from re import findall
-from time import sleep
-
-from requests import Session
-
-
-class Mail:
- def __init__(self) -> None:
- self.client = Session()
- self.client.post("https://etempmail.com/")
- self.cookies = {'acceptcookie': 'true'}
- self.cookies["ci_session"] = self.client.cookies.get_dict()["ci_session"]
- self.email = None
-
- def get_mail(self):
- respone = self.client.post("https://etempmail.com/getEmailAddress")
- # cookies
- self.cookies["lisansimo"] = eval(respone.text)["recover_key"]
- self.email = eval(respone.text)["address"]
- return self.email
-
- def get_message(self):
- print("Waiting for message...")
- while True:
- sleep(5)
- respone = self.client.post("https://etempmail.com/getInbox")
- mail_token = loads(respone.text)
- print(self.client.cookies.get_dict())
- if len(mail_token) == 1:
- break
-
- params = {
- 'id': '1',
- }
- self.mail_context = self.client.post("https://etempmail.com/getInbox", params=params)
- self.mail_context = loads(self.mail_context.text)[0]["body"]
- return self.mail_context
-
- # ,cookies=self.cookies
- def get_verification_code(self):
- message = self.mail_context
- code = findall(r';">(\d{6,7})', message)[0]
- print(f"Verification code: {code}")
- return code
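-
-
-# Hedged usage sketch, built only from the class above: request a disposable address, wait
-# for a message to arrive, then extract the verification code (hits the live service).
-# mail = Mail()
-# print("Temporary address:", mail.get_mail())
-# mail.get_message()
-# code = mail.get_verification_code()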
diff --git a/spaces/kanden/vits-uma-genshin-honkai/attentions.py b/spaces/kanden/vits-uma-genshin-honkai/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/kanden/vits-uma-genshin-honkai/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
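-
-
-# Hedged shape-check sketch: assumes commons.py and modules.py from this repo are importable,
-# and uses made-up sizes (batch 2, 192 hidden channels, 60 frames).
-# x = torch.randn(2, 192, 60)
-# x_mask = torch.ones(2, 1, 60)
-# enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1)
-# print(enc(x, x_mask).shape)  # -> torch.Size([2, 192, 60])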
diff --git a/spaces/kanokon/GUI/README.md b/spaces/kanokon/GUI/README.md
deleted file mode 100644
index f23537c524730cf03f30b8263326ecc93b486849..0000000000000000000000000000000000000000
--- a/spaces/kanokon/GUI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GUI
-emoji: 🐨
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/keras-dreambooth/dreambooth_fantasy/README.md b/spaces/keras-dreambooth/dreambooth_fantasy/README.md
deleted file mode 100644
index 10f6d2eeb932702964ca38600036ff49f7eae291..0000000000000000000000000000000000000000
--- a/spaces/keras-dreambooth/dreambooth_fantasy/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Dreambooth Fantasy
-emoji: 🐢
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-tags:
-- keras-dreambooth
-- scifi
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kevinwang676/VITS2-Mandarin/app.py b/spaces/kevinwang676/VITS2-Mandarin/app.py
deleted file mode 100644
index 1b163f9bba9711a38ed00683aaeedfa8072a703f..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VITS2-Mandarin/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import argparse
-import gradio as gr
-from gradio import components
-import os
-import torch
-import commons
-import utils
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import text_to_sequence
-from scipy.io.wavfile import write
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-# the Gradio interface below passes only the text box value, so text must be the first
-# (and only required) argument; the model and config paths fall back to the bundled defaults
-def tts(text, model_path="./logs/G_23300.pth", config_path="./configs/config.json"):
- hps = utils.get_hparams_from_file(config_path)
-
- if "use_mel_posterior_encoder" in hps.model.keys() and hps.model.use_mel_posterior_encoder == True:
- posterior_channels = 80
- hps.data.use_mel_posterior_encoder = True
- else:
- posterior_channels = hps.data.filter_length // 2 + 1
- hps.data.use_mel_posterior_encoder = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- posterior_channels,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model).cuda()
- _ = net_g.eval()
- _ = utils.load_checkpoint(model_path, net_g, None)
-
- stn_tst = get_text(text, hps)
- x_tst = stn_tst.cuda().unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).cuda()
-
- with torch.no_grad():
- audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
-
- output_wav_path = "output.wav"
- write(output_wav_path, hps.data.sampling_rate, audio)
-
- return output_wav_path
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument('--model_path', type=str, default="./logs/G_23300.pth", help='Path to the model file.')
- parser.add_argument('--config_path', type=str, default="./configs/config.json", help='Path to the config file.')
- args = parser.parse_args()
-
- model_files = [f for f in os.listdir('./logs/') if f.endswith('.pth')]
- model_files.sort(key=lambda x: int(x.split('_')[-1].split('.')[0]), reverse=True)
- config_files = [f for f in os.listdir('./configs/') if f.endswith('.json')]
-
- default_model_file = args.model_path if args.model_path else (model_files[0] if model_files else None)
- default_config_file = args.config_path if args.config_path else 'config.json'
-
- gr.Interface(
- fn=tts,
- inputs=components.Textbox(label="Text Input"),
- outputs=components.Audio(type='filepath', label="Generated Speech"),
- live=False
- ).launch(show_error=True)
\ No newline at end of file
diff --git a/spaces/kevinwang676/VITS2-Mandarin/train.py b/spaces/kevinwang676/VITS2-Mandarin/train.py
deleted file mode 100644
index 3afdb475b62910c12c75eb929911bbf6c00c3ebc..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VITS2-Mandarin/train.py
+++ /dev/null
@@ -1,453 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-import tqdm
-from pqmf import PQMF
-import commons
-import utils
-from data_utils import (
- TextAudioLoader,
- TextAudioCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
- AVAILABLE_FLOW_TYPES,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss,
- subband_stft_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.autograd.set_detect_anomaly(True)
-torch.backends.cudnn.benchmark = True
-global_step = 0
-
-
-# - base vits2 : Aug 29, 2023
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '6060'
-
- hps = utils.get_hparams()
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- if os.name == 'nt':
- dist.init_process_group(backend='gloo', init_method='env://', world_size=n_gpus, rank=rank)
- else:
- dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- if "use_mel_posterior_encoder" in hps.model.keys() and hps.model.use_mel_posterior_encoder == True: # P.incoder for vits2
- print("Using mel posterior encoder for VITS2")
- posterior_channels = 80 # vits2
- hps.data.use_mel_posterior_encoder = True
- else:
- print("Using lin posterior encoder for VITS1")
- posterior_channels = hps.data.filter_length // 2 + 1
- hps.data.use_mel_posterior_encoder = False
-
- train_dataset = TextAudioLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
-
- collate_fn = TextAudioCollate()
- train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False,
- batch_size=hps.train.batch_size, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
-    # some of these flags are not used in the code and are instead set directly in the hps json file;
- # they are kept here for reference and prototyping.
-
- if "use_transformer_flows" in hps.model.keys() and hps.model.use_transformer_flows == True:
- use_transformer_flows = True
- transformer_flow_type = hps.model.transformer_flow_type
- print(f"Using transformer flows {transformer_flow_type} for VITS2")
- assert transformer_flow_type in AVAILABLE_FLOW_TYPES, f"transformer_flow_type must be one of {AVAILABLE_FLOW_TYPES}"
- else:
- print("Using normal flows for VITS1")
- use_transformer_flows = False
-
- if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True:
- if hps.data.n_speakers == 0:
- print("Warning: use_spk_conditioned_encoder is True but n_speakers is 0")
- print("Setting use_spk_conditioned_encoder to False as model is a single speaker model")
- use_spk_conditioned_encoder = False
- else:
- print("Using normal encoder for VITS1 (cuz it's single speaker after all)")
- use_spk_conditioned_encoder = False
-
- if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True:
- print("Using noise scaled MAS for VITS2")
- use_noise_scaled_mas = True
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- use_noise_scaled_mas = False
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
-
- if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True:
- print("Using duration discriminator for VITS2")
- use_duration_discriminator = True
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
- ).cuda(rank)
- else:
- print("NOT using any duration discriminator like VITS1")
- net_dur_disc = None
- use_duration_discriminator = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- posterior_channels,
- hps.train.segment_size // hps.data.hop_length,
- mas_noise_scale_initial=mas_noise_scale_initial,
- noise_scale_delta=noise_scale_delta,
- **hps.model).cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
-
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
-
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- else:
- optim_dur_disc = None
-
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
-
-    if net_dur_disc is not None:  # VITS2 case
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
-
- try:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d)
-        if net_dur_disc is not None:  # VITS2 case
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"),
- net_dur_disc, optim_dur_disc)
- global_step = (epoch_str - 1) * len(train_loader)
-    except Exception:  # no usable checkpoint found; start training from scratch
- epoch_str = 1
- global_step = 0
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
-    if net_dur_disc is not None:  # VITS2 case
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay,
- last_epoch=epoch_str - 2)
- else:
- scheduler_dur_disc = None
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc],
- [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader],
- logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc],
- [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None: # vits2
- net_dur_disc.train()
-
- if rank == 0:
-        loader = tqdm.tqdm(train_loader, desc='Training')
- else:
- loader = train_loader
-
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(loader):
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, y_hat_mb, l_length, attn, ids_slice, x_mask, z_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (
- hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths)
-
- if hps.model.use_mel_posterior_encoder or hps.data.use_mel_posterior_encoder:
- mel = spec
- else:
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
-
- # Duration Discriminator
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw_.detach(),
- logw.detach()) # logw is predicted duration, logw_ is real duration
- with autocast(enabled=False):
-                    # TODO: this should probably be averaged over the mask; for now it is averaged over all elements
- loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g)
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw_, logw)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
-
- if hps.model.mb_istft_vits == True:
- pqmf = PQMF(y.device)
- y_mb = pqmf.analysis(y)
- loss_subband = subband_stft_loss(hps, y_mb, y_hat_mb)
- else:
- loss_subband = torch.tensor(0.0)
-
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl + loss_subband
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
-
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
-
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl, loss_subband]
-
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
-
-                if net_dur_disc is not None:  # VITS2 case
- scalar_dict.update(
- {"loss/dur_disc/total": loss_dur_disc_all, "grad_norm_dur_disc": grad_norm_dur_disc})
- scalar_dict.update(
- {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl,
- "loss/g/subband": loss_subband})
-
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
-
-                # if net_dur_disc is not None:  # - on hold?
- # scalar_dict.update({"loss/dur_disc_r" : f"{losses_dur_disc_r}"})
- # scalar_dict.update({"loss/dur_disc_g" : f"{losses_dur_disc_g}"})
- # scalar_dict.update({"loss/dur_gen" : f"{loss_dur_gen}"})
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- if net_dur_disc is not None:
- utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)))
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader):
- x, x_lengths = x.cuda(0), x_lengths.cuda(0)
- spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
- y, y_lengths = y.cuda(0), y_lengths.cuda(0)
-
- # remove else
- x = x[:1]
- x_lengths = x_lengths[:1]
- spec = spec[:1]
- spec_lengths = spec_lengths[:1]
- y = y[:1]
- y_lengths = y_lengths[:1]
- break
-
- y_hat, y_hat_mb, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000)
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
-        if hps.model.use_mel_posterior_encoder or hps.data.use_mel_posterior_encoder:  # VITS2 case
- mel = spec
- else:
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict = {
- "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- }
- audio_dict = {
- "gen/audio": y_hat[0, :, :y_hat_lengths[0]]
- }
- if global_step == 0:
- image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({"gt/audio": y[0, :, :y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-
-if __name__ == "__main__":
- os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
- main()
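For reference, the noise-scaled MAS used above follows a simple linear decay: `run()` initializes `mas_noise_scale_initial = 0.01` and `noise_scale_delta = 2e-6`, and the training loop sets `current_mas_noise_scale = initial - delta * global_step`, clamped at zero. A minimal standalone sketch of that schedule:

```python
# Sketch of the noise-scaled MAS annealing used in the training loop above:
# current_mas_noise_scale = initial - delta * global_step, clamped at 0.0.

def mas_noise_scale(global_step: int,
                    initial: float = 0.01,
                    delta: float = 2e-6) -> float:
    """Linearly decayed MAS noise scale, never below zero."""
    return max(initial - delta * global_step, 0.0)

if __name__ == "__main__":
    # With these defaults the noise reaches zero after initial / delta = 5000 steps.
    for step in (0, 1000, 2500, 5000, 10000):
        print(step, mas_noise_scale(step))
```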
diff --git a/spaces/kevinwang676/VoiceChangers/rmvpe.py b/spaces/kevinwang676/VoiceChangers/rmvpe.py
deleted file mode 100644
index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/rmvpe.py
+++ /dev/null
@@ -1,432 +0,0 @@
-import sys, torch, numpy as np, traceback, pdb
-import torch.nn as nn
-from time import time as ttime
-import torch.nn.functional as F
-
-
-class BiGRU(nn.Module):
- def __init__(self, input_features, hidden_features, num_layers):
- super(BiGRU, self).__init__()
- self.gru = nn.GRU(
- input_features,
- hidden_features,
- num_layers=num_layers,
- batch_first=True,
- bidirectional=True,
- )
-
- def forward(self, x):
- return self.gru(x)[0]
-
-
-class ConvBlockRes(nn.Module):
- def __init__(self, in_channels, out_channels, momentum=0.01):
- super(ConvBlockRes, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- if in_channels != out_channels:
- self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
- self.is_shortcut = True
- else:
- self.is_shortcut = False
-
- def forward(self, x):
- if self.is_shortcut:
- return self.conv(x) + self.shortcut(x)
- else:
- return self.conv(x) + x
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- in_channels,
- in_size,
- n_encoders,
- kernel_size,
- n_blocks,
- out_channels=16,
- momentum=0.01,
- ):
- super(Encoder, self).__init__()
- self.n_encoders = n_encoders
- self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
- self.layers = nn.ModuleList()
- self.latent_channels = []
- for i in range(self.n_encoders):
- self.layers.append(
- ResEncoderBlock(
- in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
- )
- )
- self.latent_channels.append([out_channels, in_size])
- in_channels = out_channels
- out_channels *= 2
- in_size //= 2
- self.out_size = in_size
- self.out_channel = out_channels
-
- def forward(self, x):
- concat_tensors = []
- x = self.bn(x)
- for i in range(self.n_encoders):
- _, x = self.layers[i](x)
- concat_tensors.append(_)
- return x, concat_tensors
-
-
-class ResEncoderBlock(nn.Module):
- def __init__(
- self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
- ):
- super(ResEncoderBlock, self).__init__()
- self.n_blocks = n_blocks
- self.conv = nn.ModuleList()
- self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
- self.kernel_size = kernel_size
- if self.kernel_size is not None:
- self.pool = nn.AvgPool2d(kernel_size=kernel_size)
-
- def forward(self, x):
- for i in range(self.n_blocks):
- x = self.conv[i](x)
- if self.kernel_size is not None:
- return x, self.pool(x)
- else:
- return x
-
-
-class Intermediate(nn.Module): #
- def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
- super(Intermediate, self).__init__()
- self.n_inters = n_inters
- self.layers = nn.ModuleList()
- self.layers.append(
- ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
- )
- for i in range(self.n_inters - 1):
- self.layers.append(
- ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
- )
-
- def forward(self, x):
- for i in range(self.n_inters):
- x = self.layers[i](x)
- return x
-
-
-class ResDecoderBlock(nn.Module):
- def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
- super(ResDecoderBlock, self).__init__()
- out_padding = (0, 1) if stride == (1, 2) else (1, 1)
- self.n_blocks = n_blocks
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=stride,
- padding=(1, 1),
- output_padding=out_padding,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- self.conv2 = nn.ModuleList()
- self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
-
- def forward(self, x, concat_tensor):
- x = self.conv1(x)
- x = torch.cat((x, concat_tensor), dim=1)
- for i in range(self.n_blocks):
- x = self.conv2[i](x)
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
- super(Decoder, self).__init__()
- self.layers = nn.ModuleList()
- self.n_decoders = n_decoders
- for i in range(self.n_decoders):
- out_channels = in_channels // 2
- self.layers.append(
- ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
- )
- in_channels = out_channels
-
- def forward(self, x, concat_tensors):
- for i in range(self.n_decoders):
- x = self.layers[i](x, concat_tensors[-1 - i])
- return x
-
-
-class DeepUnet(nn.Module):
- def __init__(
- self,
- kernel_size,
- n_blocks,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(DeepUnet, self).__init__()
- self.encoder = Encoder(
- in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
- )
- self.intermediate = Intermediate(
- self.encoder.out_channel // 2,
- self.encoder.out_channel,
- inter_layers,
- n_blocks,
- )
- self.decoder = Decoder(
- self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
- )
-
- def forward(self, x):
- x, concat_tensors = self.encoder(x)
- x = self.intermediate(x)
- x = self.decoder(x, concat_tensors)
- return x
-
-
-class E2E(nn.Module):
- def __init__(
- self,
- n_blocks,
- n_gru,
- kernel_size,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(E2E, self).__init__()
- self.unet = DeepUnet(
- kernel_size,
- n_blocks,
- en_de_layers,
- inter_layers,
- in_channels,
- en_out_channels,
- )
- self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
- if n_gru:
- self.fc = nn.Sequential(
- BiGRU(3 * 128, 256, n_gru),
- nn.Linear(512, 360),
- nn.Dropout(0.25),
- nn.Sigmoid(),
- )
- else:
- self.fc = nn.Sequential(
-                nn.Linear(3 * 128, 360), nn.Dropout(0.25), nn.Sigmoid()  # N_MELS = 128, N_CLASS = 360 (the named constants are undefined in this file)
- )
-
- def forward(self, mel):
- mel = mel.transpose(-1, -2).unsqueeze(1)
- x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
- x = self.fc(x)
- return x
-
-
-from librosa.filters import mel
-
-
-class MelSpectrogram(torch.nn.Module):
- def __init__(
- self,
- is_half,
- n_mel_channels,
- sampling_rate,
- win_length,
- hop_length,
- n_fft=None,
- mel_fmin=0,
- mel_fmax=None,
- clamp=1e-5,
- ):
- super().__init__()
- n_fft = win_length if n_fft is None else n_fft
- self.hann_window = {}
- mel_basis = mel(
- sr=sampling_rate,
- n_fft=n_fft,
- n_mels=n_mel_channels,
- fmin=mel_fmin,
- fmax=mel_fmax,
- htk=True,
- )
- mel_basis = torch.from_numpy(mel_basis).float()
- self.register_buffer("mel_basis", mel_basis)
- self.n_fft = win_length if n_fft is None else n_fft
- self.hop_length = hop_length
- self.win_length = win_length
- self.sampling_rate = sampling_rate
- self.n_mel_channels = n_mel_channels
- self.clamp = clamp
- self.is_half = is_half
-
- def forward(self, audio, keyshift=0, speed=1, center=True):
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(self.n_fft * factor))
- win_length_new = int(np.round(self.win_length * factor))
- hop_length_new = int(np.round(self.hop_length * speed))
- keyshift_key = str(keyshift) + "_" + str(audio.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
- audio.device
- )
- fft = torch.stft(
- audio,
- n_fft=n_fft_new,
- hop_length=hop_length_new,
- win_length=win_length_new,
- window=self.hann_window[keyshift_key],
- center=center,
- return_complex=True,
- )
- magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
- if keyshift != 0:
- size = self.n_fft // 2 + 1
- resize = magnitude.size(1)
- if resize < size:
- magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
- magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
- mel_output = torch.matmul(self.mel_basis, magnitude)
- if self.is_half == True:
- mel_output = mel_output.half()
- log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
- return log_mel_spec
-
-
-class RMVPE:
- def __init__(self, model_path, is_half, device=None):
- self.resample_kernel = {}
- model = E2E(4, 1, (2, 2))
- ckpt = torch.load(model_path, map_location="cpu")
- model.load_state_dict(ckpt)
- model.eval()
- if is_half == True:
- model = model.half()
- self.model = model
- self.resample_kernel = {}
- self.is_half = is_half
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.device = device
- self.mel_extractor = MelSpectrogram(
- is_half, 128, 16000, 1024, 160, None, 30, 8000
- ).to(device)
- self.model = self.model.to(device)
- cents_mapping = 20 * np.arange(360) + 1997.3794084376191
- self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
-
- def mel2hidden(self, mel):
- with torch.no_grad():
- n_frames = mel.shape[-1]
- mel = F.pad(
- mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect"
- )
- hidden = self.model(mel)
- return hidden[:, :n_frames]
-
- def decode(self, hidden, thred=0.03):
- cents_pred = self.to_local_average_cents(hidden, thred=thred)
- f0 = 10 * (2 ** (cents_pred / 1200))
- f0[f0 == 10] = 0
- # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
- return f0
-
- def infer_from_audio(self, audio, thred=0.03):
- audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0)
- # torch.cuda.synchronize()
- # t0=ttime()
- mel = self.mel_extractor(audio, center=True)
- # torch.cuda.synchronize()
- # t1=ttime()
- hidden = self.mel2hidden(mel)
- # torch.cuda.synchronize()
- # t2=ttime()
- hidden = hidden.squeeze(0).cpu().numpy()
- if self.is_half == True:
- hidden = hidden.astype("float32")
- f0 = self.decode(hidden, thred=thred)
- # torch.cuda.synchronize()
- # t3=ttime()
- # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
- return f0
-
- def to_local_average_cents(self, salience, thred=0.05):
- # t0 = ttime()
-        center = np.argmax(salience, axis=1)  # (n_frames,) index of the peak bin per frame
-        salience = np.pad(salience, ((0, 0), (4, 4)))  # (n_frames, 368)
- # t1 = ttime()
- center += 4
- todo_salience = []
- todo_cents_mapping = []
- starts = center - 4
- ends = center + 5
- for idx in range(salience.shape[0]):
- todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
- todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
- # t2 = ttime()
-        todo_salience = np.array(todo_salience)  # (n_frames, 9)
-        todo_cents_mapping = np.array(todo_cents_mapping)  # (n_frames, 9)
-        product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
-        weight_sum = np.sum(todo_salience, 1)  # (n_frames,)
-        divided = product_sum / weight_sum  # (n_frames,)
-        # t3 = ttime()
-        maxx = np.max(salience, axis=1)  # (n_frames,)
-        divided[maxx <= thred] = 0
-        # t4 = ttime()
-        # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
-        return divided
-
-
-# if __name__ == '__main__':
-# audio, sampling_rate = sf.read("卢本伟语录~1.wav")
-# if len(audio.shape) > 1:
-# audio = librosa.to_mono(audio.transpose(1, 0))
-# audio_bak = audio.copy()
-# if sampling_rate != 16000:
-# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt"
-# thred = 0.03 # 0.01
-# device = 'cuda' if torch.cuda.is_available() else 'cpu'
-# rmvpe = RMVPE(model_path,is_half=False, device=device)
-# t0=ttime()
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# t1=ttime()
-# print(f0.shape,t1-t0)
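A standalone sketch of the decoding math used by `RMVPE.decode` and `to_local_average_cents` above: the 360 output bins are spaced 20 cents apart starting at roughly 1997.38 cents, a weighted average over a 9-bin window around the peak refines the estimate, and cents convert to Hz via `f0 = 10 * 2 ** (cents / 1200)` (frames whose peak salience falls below the threshold are set to 0). The toy frame below is illustrative only:

```python
import numpy as np

# 360 pitch bins, 20 cents apart, exactly as built in RMVPE.__init__ above
cents_mapping = 20 * np.arange(360) + 1997.3794084376191

def cents_to_hz(cents):
    # same conversion as RMVPE.decode: f0 = 10 * 2 ** (cents / 1200)
    return 10 * (2 ** (cents / 1200))

def local_average_cents(frame, thred=0.05):
    """Weighted average of cents over a 9-bin window around the peak (one frame)."""
    if frame.max() <= thred:
        return 0.0
    center = int(np.argmax(frame))
    padded_salience = np.pad(frame, (4, 4))
    padded_mapping = np.pad(cents_mapping, (4, 4))
    window = slice(center, center + 9)   # 9 bins centred on the peak, in padded coords
    weights = padded_salience[window]
    return float((weights * padded_mapping[window]).sum() / weights.sum())

# toy frame with a salience peak at bin 180 (made-up values)
frame = np.zeros(360)
frame[179:182] = [0.2, 1.0, 0.2]
cents = local_average_cents(frame)
print(cents, cents_to_hz(cents))         # roughly 5597 cents, ~254 Hz
```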
diff --git a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html b/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html
deleted file mode 100644
index 47ff53b5cfd8549e8f1ec797083aafa99f7eb4d7..0000000000000000000000000000000000000000
--- a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html
+++ /dev/null
@@ -1,15 +0,0 @@
-<html>
-<head>
-    <meta charset="utf-8">
-    <title>Fourthbrain Capstone: Healthcare Anomalies</title>
-</head>
-<body>
-    <div>
-        <h3>{{ paramTitle }}:</h3>
-    </div>
-    <div>
-        <div>
-            {{ paramDataframe | safe }}
-        </div>
-    </div>
-</body>
\ No newline at end of file
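The deleted template expects a title string and an HTML fragment for the dataframe (hence the `| safe` filter). A hedged sketch of how it might be rendered with Jinja2; the use of `pandas.DataFrame.to_html` and the example values are assumptions, not taken from the app:

```python
# Hedged sketch: render templ_showDataframe.html with jinja2 + pandas.
# Assumes paramDataframe is already an HTML table string, e.g. DataFrame.to_html().
import pandas as pd
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templ"))
template = env.get_template("templ_showDataframe.html")

df = pd.DataFrame({"patient_id": [1, 2], "anomaly_score": [0.12, 0.87]})   # made-up data
html = template.render(
    paramTitle="Liver HCC XAI results",       # hypothetical title
    paramDataframe=df.to_html(index=False),   # marked safe in the template
)
print(html[:200])
```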
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py
deleted file mode 100644
index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import torch
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward',
- 'points_in_boxes_all_forward'
-])
-
-
-def points_in_boxes_part(points, boxes):
- """Find the box in which each point is (CUDA).
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in
- LiDAR/DEPTH coordinate, (x, y, z) is the bottom center
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M), default background = -1
- """
- assert points.shape[0] == boxes.shape[0], \
- 'Points and boxes should have the same batch size, ' \
- f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
-
- box_idxs_of_pts = points.new_zeros((batch_size, num_points),
- dtype=torch.int).fill_(-1)
-
- # If manually put the tensor 'points' or 'boxes' on a device
- # which is not the current device, some temporary variables
- # will be created on the current device in the cuda op,
- # and the output will be incorrect.
- # Therefore, we force the current device to be the same
- # as the device of the tensors if it was not.
- # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305
- # for the incorrect output before the fix.
- points_device = points.get_device()
- assert points_device == boxes.get_device(), \
- 'Points and boxes should be put on the same device'
- if torch.cuda.current_device() != points_device:
- torch.cuda.set_device(points_device)
-
- ext_module.points_in_boxes_part_forward(boxes.contiguous(),
- points.contiguous(),
- box_idxs_of_pts)
-
- return box_idxs_of_pts
-
-
-def points_in_boxes_cpu(points, boxes):
- """Find all boxes in which each point is (CPU). The CPU version of
- :meth:`points_in_boxes_all`.
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in
- LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
- (x, y, z) is the bottom center.
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
- """
- assert points.shape[0] == boxes.shape[0], \
- 'Points and boxes should have the same batch size, ' \
- f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
- num_boxes = boxes.shape[1]
-
- point_indices = points.new_zeros((batch_size, num_boxes, num_points),
- dtype=torch.int)
- for b in range(batch_size):
- ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(),
- points[b].float().contiguous(),
- point_indices[b])
- point_indices = point_indices.transpose(1, 2)
-
- return point_indices
-
-
-def points_in_boxes_all(points, boxes):
- """Find all boxes in which each point is (CUDA).
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
- (x, y, z) is the bottom center.
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
- """
- assert boxes.shape[0] == points.shape[0], \
- 'Points and boxes should have the same batch size, ' \
-        f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
- num_boxes = boxes.shape[1]
-
- box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes),
- dtype=torch.int).fill_(0)
-
- # Same reason as line 25-32
- points_device = points.get_device()
- assert points_device == boxes.get_device(), \
- 'Points and boxes should be put on the same device'
- if torch.cuda.current_device() != points_device:
- torch.cuda.set_device(points_device)
-
- ext_module.points_in_boxes_all_forward(boxes.contiguous(),
- points.contiguous(),
- box_idxs_of_pts)
-
- return box_idxs_of_pts
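The three functions above dispatch to compiled mmcv extension ops. For illustration only, a pure-PyTorch reference of the same geometric test (rotate each point into the box frame by the yaw `rz`, then compare against the box extents; `z` is measured from the bottom center, as in the docstrings). This is a sketch for understanding, not the library implementation:

```python
import torch

def points_in_boxes_all_ref(points: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
    """Reference check. points: [B, M, 3]; boxes: [B, T, 7] as
    [x, y, z, x_size, y_size, z_size, rz], with (x, y, z) the bottom center.
    Returns a [B, M, T] 0/1 tensor (1 where the point lies inside the box)."""
    centers, sizes, yaw = boxes[..., :3], boxes[..., 3:6], boxes[..., 6]

    # offsets of every point from every box center: [B, M, T, 3]
    d = points.unsqueeze(2) - centers.unsqueeze(1)

    cos, sin = torch.cos(yaw).unsqueeze(1), torch.sin(yaw).unsqueeze(1)   # [B, 1, T]
    # rotate the xy offset into the box frame (inverse rotation by rz)
    local_x = d[..., 0] * cos + d[..., 1] * sin
    local_y = -d[..., 0] * sin + d[..., 1] * cos
    local_z = d[..., 2]

    half = sizes.unsqueeze(1) / 2                                         # [B, 1, T, 3]
    in_x = local_x.abs() <= half[..., 0]
    in_y = local_y.abs() <= half[..., 1]
    # z is measured from the bottom center, so the valid range is [0, z_size]
    in_z = (local_z >= 0) & (local_z <= 2 * half[..., 2])
    return (in_x & in_y & in_z).int()

# toy example: one batch, two points, one unrotated 2x2x2 box at the origin
points = torch.tensor([[[0.0, 0.0, 1.0], [5.0, 0.0, 1.0]]])
boxes = torch.tensor([[[0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 0.0]]])
print(points_in_boxes_all_ref(points, boxes))   # first point inside, second outside
```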
diff --git a/spaces/knkarthick/Meeting-Demo/app.py b/spaces/knkarthick/Meeting-Demo/app.py
deleted file mode 100644
index ff9fa2b6696368b31c2939194e091f7ba6126b38..0000000000000000000000000000000000000000
--- a/spaces/knkarthick/Meeting-Demo/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import os
-os.system("pip install gradio==3.0.18")
-from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoModelForTokenClassification
-import gradio as gr
-import spacy
-nlp = spacy.load('en_core_web_sm')
-nlp.add_pipe('sentencizer')
-
-def split_in_sentences(text):
- doc = nlp(text)
- return [str(sent).strip() for sent in doc.sents]
-
-def make_spans(text,results):
- results_list = []
- for i in range(len(results)):
- results_list.append(results[i]['label'])
- facts_spans = []
- facts_spans = list(zip(split_in_sentences(text),results_list))
- return facts_spans
-
-auth_token = os.environ.get("HF_Token")
-
-##Speech Recognition
-asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h")
-def transcribe(audio):
- text = asr(audio)["text"]
- return text
-def speech_to_text(speech):
- text = asr(speech)["text"]
- return text
-
-##Summarization
-summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")
-def summarize_text(text):
- resp = summarizer(text)
- stext = resp[0]['summary_text']
- return stext
-
-summarizer1 = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
-def summarize_text1(text):
- resp = summarizer1(text)
- stext = resp[0]['summary_text']
- return stext
-
-summarizer2 = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
-def summarize_text2(text):
- resp = summarizer2(text)
- stext = resp[0]['summary_text']
- return stext
-
-##Fiscal Tone Analysis
-sen_model= pipeline("sentiment-analysis", model='knkarthick/Sentiment-Analysis', tokenizer='knkarthick/Sentiment-Analysis')
-def text_to_sentiment(text):
- sentiment = sen_model(text)[0]["label"]
- return sentiment
-
-##Fiscal Sentiment by Sentence
-def sen_ext(text):
- results = sen_model(split_in_sentences(text))
- return make_spans(text,results)
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("## Meeting Transcript AI Use Cases")
- gr.Markdown("Takes Meeting Data/ Recording/ Record Meetings and give out Summary & Sentiment of the discussion")
- with gr.Row():
- with gr.Column():
- audio_file = gr.inputs.Audio(source="microphone", type="filepath")
- with gr.Row():
- b1 = gr.Button("Recognize Speech")
- with gr.Row():
- text = gr.Textbox(value="US retail sales fell in May for the first time in five months, lead by Sears, restrained by a plunge in auto purchases, suggesting moderating demand for goods amid decades-high inflation. The value of overall retail purchases decreased 0.3%, after a downwardly revised 0.7% gain in April, Commerce Department figures showed Wednesday. Excluding Tesla vehicles, sales rose 0.5% last month. The department expects inflation to continue to rise.")
- b1.click(speech_to_text, inputs=audio_file, outputs=text)
- with gr.Row():
- b2 = gr.Button("Overall Sentiment Analysis of Dialogues")
- fin_spans = gr.HighlightedText()
- b2.click(sen_ext, inputs=text, outputs=fin_spans)
- with gr.Row():
- b3 = gr.Button("Summary Text Outputs")
- with gr.Column():
- with gr.Row():
- stext = gr.Textbox(label="Model-I")
- b3.click(summarize_text, inputs=text, outputs=stext)
- with gr.Column():
- with gr.Row():
- stext1 = gr.Textbox(label="Model-II")
- b3.click(summarize_text1, inputs=text, outputs=stext1)
- with gr.Column():
- with gr.Row():
- stext2 = gr.Textbox(label="Model-III")
- b3.click(summarize_text2, inputs=text, outputs=stext2)
- with gr.Row():
- b4 = gr.Button("Sentiment Analysis")
- with gr.Column():
- with gr.Row():
- label = gr.Label(label="Sentiment Of Summary-I")
- b4.click(text_to_sentiment, inputs=stext, outputs=label)
- with gr.Column():
- with gr.Row():
- label1 = gr.Label(label="Sentiment Of Summary-II")
- b4.click(text_to_sentiment, inputs=stext1, outputs=label1)
- with gr.Column():
- with gr.Row():
- label2 = gr.Label(label="Sentiment Of Summary-III")
- b4.click(text_to_sentiment, inputs=stext2, outputs=label2)
- with gr.Row():
- b5 = gr.Button("Dialogue Sentiment Analysis")
- with gr.Column():
- with gr.Row():
- fin_spans = gr.HighlightedText(label="Sentiment Of Summary-I Dialogues")
- b5.click(sen_ext, inputs=stext, outputs=fin_spans)
- with gr.Column():
- with gr.Row():
- fin_spans1 = gr.HighlightedText(label="Sentiment Of Summary-II Dialogues")
- b5.click(sen_ext, inputs=stext1, outputs=fin_spans1)
- with gr.Column():
- with gr.Row():
- fin_spans2 = gr.HighlightedText(label="Sentiment Of Summary-III Dialogues")
- b5.click(sen_ext, inputs=stext2, outputs=fin_spans2)
-demo.launch()
\ No newline at end of file
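`sen_ext`/`make_spans` above produce the `(sentence, label)` pairs that `gr.HighlightedText` expects. A small self-contained illustration of that data flow with a stubbed classifier output (the stub mimics the list-of-dicts format returned by the `sentiment-analysis` pipeline):

```python
# Sketch of the data flow behind split_in_sentences/make_spans above: sentence-level
# labels are zipped with the sentences, giving (text, label) pairs for HighlightedText.
def make_spans_sketch(sentences, results):
    labels = [r["label"] for r in results]
    return list(zip(sentences, labels))

sentences = [
    "US retail sales fell in May for the first time in five months.",
    "Excluding Tesla vehicles, sales rose 0.5% last month.",
]
stub_results = [{"label": "NEGATIVE"}, {"label": "POSITIVE"}]   # stand-in for sen_model(sentences)
print(make_spans_sketch(sentences, stub_results))
# -> [(sentence_1, 'NEGATIVE'), (sentence_2, 'POSITIVE')]
```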
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py b/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py
deleted file mode 100644
index 0b02ce18772454697e61f827d96d76ad361b9cd1..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-
-import torch
-import torch.nn.functional as F
-
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-
-
-_EPSILON = torch.finfo(torch.float32).eps
-TARGET_DIST_NORM_CHOICES = ChoiceEnum(["none", "minmax"])
-
-
-@dataclass
-class KLDivergenceRerankingCriterionConfig(FairseqDataclass):
- target_dist_norm: TARGET_DIST_NORM_CHOICES = field(
- default="none",
- metadata={"help": "method to normalize the range of target scores"},
- )
- temperature: float = field(
- default=1.0,
- metadata={"help": "temperature in softmax for target distributions"},
- )
- forward_batch_size: int = field(
- default=32,
- metadata={
- "help": "number of hypotheses per batch for model forward (set a value smaller than --mt-beam to avoid OOM when training with a large beam size)"
- },
- )
-
-
-@register_criterion(
- "kl_divergence_rereanking", dataclass=KLDivergenceRerankingCriterionConfig
-)
-class KLDivergenceRerankingCriterion(FairseqCriterion):
- def __init__(
- self, task, target_dist_norm, temperature, forward_batch_size,
- ):
- super().__init__(task)
- self.target_dist_norm = target_dist_norm
- self.temperature = temperature
- self.forward_batch_size = forward_batch_size
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
-
- sample_size = sample["id"].numel()
- assert sample_size % self.task.cfg.mt_beam == 0, (
- f"sample_size ({sample_size}) cannot be divided by beam size ({self.task.cfg.mt_beam})."
- f"Please set --required-batch-size-multiple={self.task.cfg.mt_beam}."
- )
-
- # split into smaller batches for model forward
- batch_out = []
- for i in range(0, sample_size, self.forward_batch_size):
- j = min(i + self.forward_batch_size, sample_size)
-
- out = model(
- src_tokens=sample["net_input"]["src_tokens"][i:j, :],
- src_lengths=sample["net_input"]["src_lengths"][i:j],
- )
-
- batch_out.append(
- model.sentence_forward(out, sample["net_input"]["src_tokens"][i:j, :])
- )
-
- batch_out = torch.cat(batch_out, dim=0).view(
- self.task.cfg.mt_beam, sample_size // self.task.cfg.mt_beam, -1
- ) # T x B x C
- if model.joint_classification == "sent":
- batch_out = model.joint_forward(batch_out)
- scores = model.classification_forward(batch_out.view(sample_size, 1, -1)).view(
- -1, self.task.cfg.mt_beam
- ) # input: B x T x C
-
- loss = self.compute_kl_loss(
- scores, sample["target"][:, 0].view(-1, self.task.cfg.mt_beam)
- )
-
- sample_size = sample_size // self.task.cfg.mt_beam
-
- logging_output = {
- "loss": loss.detach(),
- "ntokens": sample["ntokens"],
- "nsentences": sample_size * self.task.cfg.mt_beam,
- "sample_size": sample_size,
- "scores": scores.detach(),
- }
-
- return loss, sample_size, logging_output
-
- def compute_kl_loss(self, logits, target):
- norm_target = target
- if self.target_dist_norm == "minmax":
- min_v = torch.min(target, 1, keepdim=True).values
- max_v = torch.max(target, 1, keepdim=True).values
- norm_target = (target - min_v) / (max_v - min_v + _EPSILON)
-
- target_dist = F.softmax(
- norm_target / self.temperature, dim=-1, dtype=torch.float32
- )
- model_dist = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- loss = -(target_dist * model_dist - target_dist * target_dist.log()).sum()
- return loss
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
-
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
-
- loss = loss_sum / sample_size / math.log(2)
- metrics.log_scalar("loss", loss, sample_size, round=3)
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
deleted file mode 100644
index c523d92634d9b61b97bbcdbfd17dfc33465bfc09..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-SCRIPT=`realpath $0`
-MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2
-
-export PATH=$PATH:"$MECAB/bin":"$MECAB/lib"
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib"
-
-cat - | mecab -O wakati
diff --git a/spaces/kouenYoung/anime-tts/commons.py b/spaces/kouenYoung/anime-tts/commons.py
deleted file mode 100644
index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000
--- a/spaces/kouenYoung/anime-tts/commons.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
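Two of the helpers above are easiest to see with concrete shapes: `sequence_mask` builds a boolean `[batch, max_len]` padding mask, and `slice_segments` cuts a fixed-length window from each item at the given start index. A tiny self-contained demonstration (the helper bodies are copied from the file above):

```python
import torch

def sequence_mask(length, max_length=None):
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)

def slice_segments(x, ids_str, segment_size=4):
    ret = torch.zeros_like(x[:, :, :segment_size])
    for i in range(x.size(0)):
        ret[i] = x[i, :, ids_str[i]:ids_str[i] + segment_size]
    return ret

lengths = torch.tensor([2, 4])
print(sequence_mask(lengths))
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])

x = torch.arange(2 * 1 * 6).view(2, 1, 6)      # [batch=2, channels=1, time=6]
print(slice_segments(x, torch.tensor([0, 2]), segment_size=3))
# item 0 keeps frames 0-2, item 1 keeps frames 2-4
```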
diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py b/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py
deleted file mode 100644
index 322147483915bef758770ae931e705e56083fa8d..0000000000000000000000000000000000000000
--- a/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import shutil
-
-import torch
-
-
-def get_checkpoint_files(s):
- s = s.strip()
- if ',' in s:
- return [get_checkpoint_files(chunk) for chunk in s.split(',')]
- return 'last.ckpt' if s == 'last' else f'{s}.ckpt'
-
-
-def main(args):
- checkpoint_fnames = get_checkpoint_files(args.epochs)
- if isinstance(checkpoint_fnames, str):
- checkpoint_fnames = [checkpoint_fnames]
- assert len(checkpoint_fnames) >= 1
-
- checkpoint_path = os.path.join(args.indir, 'models', checkpoint_fnames[0])
- checkpoint = torch.load(checkpoint_path, map_location='cpu')
- del checkpoint['optimizer_states']
-
- if len(checkpoint_fnames) > 1:
- for fname in checkpoint_fnames[1:]:
- print('sum', fname)
- sum_tensors_cnt = 0
- other_cp = torch.load(os.path.join(args.indir, 'models', fname), map_location='cpu')
- for k in checkpoint['state_dict'].keys():
- if checkpoint['state_dict'][k].dtype is torch.float:
- checkpoint['state_dict'][k].data.add_(other_cp['state_dict'][k].data)
- sum_tensors_cnt += 1
- print('summed', sum_tensors_cnt, 'tensors')
-
- for k in checkpoint['state_dict'].keys():
- if checkpoint['state_dict'][k].dtype is torch.float:
- checkpoint['state_dict'][k].data.mul_(1 / float(len(checkpoint_fnames)))
-
- state_dict = checkpoint['state_dict']
-
- if not args.leave_discriminators:
- for k in list(state_dict.keys()):
- if k.startswith('discriminator.'):
- del state_dict[k]
-
- if not args.leave_losses:
- for k in list(state_dict.keys()):
- if k.startswith('loss_'):
- del state_dict[k]
-
- out_checkpoint_path = os.path.join(args.outdir, 'models', 'best.ckpt')
- os.makedirs(os.path.dirname(out_checkpoint_path), exist_ok=True)
-
- torch.save(checkpoint, out_checkpoint_path)
-
- shutil.copy2(os.path.join(args.indir, 'config.yaml'),
- os.path.join(args.outdir, 'config.yaml'))
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('indir',
- help='Path to directory with output of training '
- '(i.e. directory, which has samples, modules, config.yaml and train.log')
- aparser.add_argument('outdir',
- help='Where to put minimal checkpoint, which can be consumed by "bin/predict.py"')
- aparser.add_argument('--epochs', type=str, default='last',
- help='Which checkpoint to take. '
- 'Can be "last" or integer - number of epoch')
- aparser.add_argument('--leave-discriminators', action='store_true',
- help='If enabled, the state of discriminators will not be removed from the checkpoint')
- aparser.add_argument('--leave-losses', action='store_true',
- help='If enabled, weights of nn-based losses (e.g. perceptual) will not be removed')
-
- main(aparser.parse_args())
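The script above averages the float tensors of several training checkpoints and strips optimizer and (optionally) discriminator/loss state to produce a small inference checkpoint. The core averaging step, isolated as a sketch on toy state dicts:

```python
import torch

def average_state_dicts(state_dicts):
    """Average float tensors across checkpoints; other tensors are taken from the first."""
    avg = {k: v.clone() for k, v in state_dicts[0].items()}
    for sd in state_dicts[1:]:
        for k, v in sd.items():
            if avg[k].dtype is torch.float:
                avg[k] += v
    for k in avg:
        if avg[k].dtype is torch.float:
            avg[k] /= len(state_dicts)
    return avg

# toy example with two "checkpoints"
a = {"w": torch.tensor([1.0, 3.0]), "step": torch.tensor(100)}
b = {"w": torch.tensor([3.0, 5.0]), "step": torch.tensor(200)}
print(average_state_dicts([a, b])["w"])   # tensor([2., 4.])
```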
diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py
deleted file mode 100644
index aa9e80402633c08a580929b38a5cb695cb7171d8..0000000000000000000000000000000000000000
--- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import logging
-import math
-from typing import Dict
-
-import numpy as np
-import torch
-import torch.nn as nn
-import tqdm
-from torch.utils.data import DataLoader
-
-from saicinpainting.evaluation.utils import move_to_device
-
-LOGGER = logging.getLogger(__name__)
-
-
-class InpaintingEvaluator():
- def __init__(self, dataset, scores, area_grouping=True, bins=10, batch_size=32, device='cuda',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param dataset: torch.utils.data.Dataset which contains images and masks
- :param scores: dict {score_name: EvaluatorScore object}
- :param area_grouping: in addition to the overall scores, allows to compute score for the groups of samples
- which are defined by share of area occluded by mask
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
- :param batch_size: batch_size for the dataloader
- :param device: device to use
- """
- self.scores = scores
- self.dataset = dataset
-
- self.area_grouping = area_grouping
- self.bins = bins
-
- self.device = torch.device(device)
-
- self.dataloader = DataLoader(self.dataset, shuffle=False, batch_size=batch_size)
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- def _get_bin_edges(self):
- bin_edges = np.linspace(0, 1, self.bins + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins)) - 1)
- interval_names = []
- for idx_bin in range(self.bins):
- start_percent, end_percent = round(100 * bin_edges[idx_bin], num_digits), \
- round(100 * bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- groups = []
- for batch in self.dataloader:
- mask = batch['mask']
- batch_size = mask.shape[0]
- area = mask.to(self.device).reshape(batch_size, -1).mean(dim=-1)
- bin_indices = np.searchsorted(bin_edges, area.detach().cpu().numpy(), side='right') - 1
- # corner case: when area is equal to 1, bin_indices should return bins - 1, not bins for that element
- bin_indices[bin_indices == self.bins] = self.bins - 1
- groups.append(bin_indices)
- groups = np.hstack(groups)
-
- return groups, interval_names
-
- def evaluate(self, model=None):
- """
- :param model: callable with signature (image_batch, mask_batch); should return inpainted_batch
- :return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- results = dict()
- if self.area_grouping:
- groups, interval_names = self._get_bin_edges()
- else:
- groups = None
-
- for score_name, score in tqdm.auto.tqdm(self.scores.items(), desc='scores'):
- score.to(self.device)
- with torch.no_grad():
- score.reset()
- for batch in tqdm.auto.tqdm(self.dataloader, desc=score_name, leave=False):
- batch = move_to_device(batch, self.device)
- image_batch, mask_batch = batch['image'], batch['mask']
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- if model is None:
- assert 'inpainted' in batch, \
- 'Model is None, so we expected precomputed inpainting results at key "inpainted"'
- inpainted_batch = batch['inpainted']
- else:
- inpainted_batch = model(image_batch, mask_batch)
- score(inpainted_batch, image_batch, mask_batch)
- total_results, group_results = score.get_value(groups=groups)
-
- results[(score_name, 'total')] = total_results
- if groups is not None:
- for group_index, group_values in group_results.items():
- group_name = interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- return results
-
-
-def ssim_fid100_f1(metrics, fid_scale=100):
- ssim = metrics[('ssim', 'total')]['mean']
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * ssim * fid_rel / (ssim + fid_rel + 1e-3)
- return f1
-
-
-def lpips_fid100_f1(metrics, fid_scale=100):
- neg_lpips = 1 - metrics[('lpips', 'total')]['mean'] # invert, so bigger is better
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * neg_lpips * fid_rel / (neg_lpips + fid_rel + 1e-3)
- return f1
-
-
-
-class InpaintingEvaluatorOnline(nn.Module):
- def __init__(self, scores, bins=10, image_key='image', inpainted_key='inpainted',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param scores: dict {score_name: EvaluatorScore object}
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
- :param device: device to use
- """
- super().__init__()
- LOGGER.info(f'{type(self)} init called')
- self.scores = nn.ModuleDict(scores)
- self.image_key = image_key
- self.inpainted_key = inpainted_key
- self.bins_num = bins
- self.bin_edges = np.linspace(0, 1, self.bins_num + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins_num)) - 1)
- self.interval_names = []
- for idx_bin in range(self.bins_num):
- start_percent, end_percent = round(100 * self.bin_edges[idx_bin], num_digits), \
- round(100 * self.bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- self.interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- self.groups = []
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- LOGGER.info(f'{type(self)} init done')
-
- def _get_bins(self, mask_batch):
- batch_size = mask_batch.shape[0]
- area = mask_batch.view(batch_size, -1).mean(dim=-1).detach().cpu().numpy()
- bin_indices = np.clip(np.searchsorted(self.bin_edges, area) - 1, 0, self.bins_num - 1)
- return bin_indices
-
- def forward(self, batch: Dict[str, torch.Tensor]):
- """
- Calculate and accumulate metrics for batch. To finalize evaluation and obtain final metrics, call evaluation_end
- :param batch: batch dict with mandatory fields mask, image, inpainted (can be overriden by self.inpainted_key)
- """
- result = {}
- with torch.no_grad():
- image_batch, mask_batch, inpainted_batch = batch[self.image_key], batch['mask'], batch[self.inpainted_key]
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- self.groups.extend(self._get_bins(mask_batch))
-
- for score_name, score in self.scores.items():
- result[score_name] = score(inpainted_batch, image_batch, mask_batch)
- return result
-
- def process_batch(self, batch: Dict[str, torch.Tensor]):
- return self(batch)
-
- def evaluation_end(self, states=None):
- """:return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- LOGGER.info(f'{type(self)}: evaluation_end called')
-
- self.groups = np.array(self.groups)
-
- results = {}
- for score_name, score in self.scores.items():
- LOGGER.info(f'Getting value of {score_name}')
- cur_states = [s[score_name] for s in states] if states is not None else None
- total_results, group_results = score.get_value(groups=self.groups, states=cur_states)
- LOGGER.info(f'Getting value of {score_name} done')
- results[(score_name, 'total')] = total_results
-
- for group_index, group_values in group_results.items():
- group_name = self.interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- LOGGER.info(f'{type(self)}: reset scores')
- self.groups = []
- for sc in self.scores.values():
- sc.reset()
- LOGGER.info(f'{type(self)}: reset scores done')
-
- LOGGER.info(f'{type(self)}: evaluation_end done')
- return results
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py
deleted file mode 100644
index eac27e679bd2f18dd33d0ee2ff405c8eee4caecf..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py
+++ /dev/null
@@ -1,318 +0,0 @@
-#
-# The Python Imaging Library.
-#
-# SPIDER image file handling
-#
-# History:
-# 2004-08-02 Created BB
-# 2006-03-02 added save method
-# 2006-03-13 added support for stack images
-#
-# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144.
-# Copyright (c) 2004 by William Baxter.
-# Copyright (c) 2004 by Secret Labs AB.
-# Copyright (c) 2004 by Fredrik Lundh.
-#
-
-##
-# Image plugin for the Spider image format. This format is used
-# by the SPIDER software, in processing image data from electron
-# microscopy and tomography.
-##
-
-#
-# SpiderImagePlugin.py
-#
-# The Spider image format is used by SPIDER software, in processing
-# image data from electron microscopy and tomography.
-#
-# Spider home page:
-# https://spider.wadsworth.org/spider_doc/spider/docs/spider.html
-#
-# Details about the Spider image format:
-# https://spider.wadsworth.org/spider_doc/spider/docs/image_doc.html
-#
-import os
-import struct
-import sys
-
-from PIL import Image, ImageFile
-
-
-def isInt(f):
- try:
- i = int(f)
- if f - i == 0:
- return 1
- else:
- return 0
- except (ValueError, OverflowError):
- return 0
-
-
-iforms = [1, 3, -11, -12, -21, -22]
-
-
-# There is no magic number to identify Spider files, so just check a
-# series of header locations to see if they have reasonable values.
-# Returns no. of bytes in the header, if it is a valid Spider header,
-# otherwise returns 0
-
-
-def isSpiderHeader(t):
- h = (99,) + t # add 1 value so can use spider header index start=1
- # header values 1,2,5,12,13,22,23 should be integers
- for i in [1, 2, 5, 12, 13, 22, 23]:
- if not isInt(h[i]):
- return 0
- # check iform
- iform = int(h[5])
- if iform not in iforms:
- return 0
- # check other header values
- labrec = int(h[13]) # no. records in file header
- labbyt = int(h[22]) # total no. of bytes in header
- lenbyt = int(h[23]) # record length in bytes
- if labbyt != (labrec * lenbyt):
- return 0
- # looks like a valid header
- return labbyt
-
-
-def isSpiderImage(filename):
- with open(filename, "rb") as fp:
- f = fp.read(92) # read 23 * 4 bytes
- t = struct.unpack(">23f", f) # try big-endian first
- hdrlen = isSpiderHeader(t)
- if hdrlen == 0:
- t = struct.unpack("<23f", f) # little-endian
- hdrlen = isSpiderHeader(t)
- return hdrlen
-
-
-class SpiderImageFile(ImageFile.ImageFile):
- format = "SPIDER"
- format_description = "Spider 2D image"
- _close_exclusive_fp_after_loading = False
-
- def _open(self):
- # check header
- n = 27 * 4 # read 27 float values
- f = self.fp.read(n)
-
- try:
- self.bigendian = 1
- t = struct.unpack(">27f", f) # try big-endian first
- hdrlen = isSpiderHeader(t)
- if hdrlen == 0:
- self.bigendian = 0
- t = struct.unpack("<27f", f) # little-endian
- hdrlen = isSpiderHeader(t)
- if hdrlen == 0:
- msg = "not a valid Spider file"
- raise SyntaxError(msg)
- except struct.error as e:
- msg = "not a valid Spider file"
- raise SyntaxError(msg) from e
-
- h = (99,) + t # add 1 value : spider header index starts at 1
- iform = int(h[5])
- if iform != 1:
- msg = "not a Spider 2D image"
- raise SyntaxError(msg)
-
- self._size = int(h[12]), int(h[2]) # size in pixels (width, height)
- self.istack = int(h[24])
- self.imgnumber = int(h[27])
-
- if self.istack == 0 and self.imgnumber == 0:
- # stk=0, img=0: a regular 2D image
- offset = hdrlen
- self._nimages = 1
- elif self.istack > 0 and self.imgnumber == 0:
- # stk>0, img=0: Opening the stack for the first time
- self.imgbytes = int(h[12]) * int(h[2]) * 4
- self.hdrlen = hdrlen
- self._nimages = int(h[26])
- # Point to the first image in the stack
- offset = hdrlen * 2
- self.imgnumber = 1
- elif self.istack == 0 and self.imgnumber > 0:
- # stk=0, img>0: an image within the stack
- offset = hdrlen + self.stkoffset
- self.istack = 2 # So Image knows it's still a stack
- else:
- msg = "inconsistent stack header values"
- raise SyntaxError(msg)
-
- if self.bigendian:
- self.rawmode = "F;32BF"
- else:
- self.rawmode = "F;32F"
- self.mode = "F"
-
- self.tile = [("raw", (0, 0) + self.size, offset, (self.rawmode, 0, 1))]
- self._fp = self.fp # FIXME: hack
-
- @property
- def n_frames(self):
- return self._nimages
-
- @property
- def is_animated(self):
- return self._nimages > 1
-
- # 1st image index is zero (although SPIDER imgnumber starts at 1)
- def tell(self):
- if self.imgnumber < 1:
- return 0
- else:
- return self.imgnumber - 1
-
- def seek(self, frame):
- if self.istack == 0:
- msg = "attempt to seek in a non-stack file"
- raise EOFError(msg)
- if not self._seek_check(frame):
- return
- self.stkoffset = self.hdrlen + frame * (self.hdrlen + self.imgbytes)
- self.fp = self._fp
- self.fp.seek(self.stkoffset)
- self._open()
-
- # returns a byte image after rescaling to 0..255
- def convert2byte(self, depth=255):
- (minimum, maximum) = self.getextrema()
- m = 1
- if maximum != minimum:
- m = depth / (maximum - minimum)
- b = -m * minimum
- return self.point(lambda i, m=m, b=b: i * m + b).convert("L")
-
- # returns a ImageTk.PhotoImage object, after rescaling to 0..255
- def tkPhotoImage(self):
- from PIL import ImageTk
-
- return ImageTk.PhotoImage(self.convert2byte(), palette=256)
-
-
-# --------------------------------------------------------------------
-# Image series
-
-
-# given a list of filenames, return a list of images
-def loadImageSeries(filelist=None):
- """create a list of :py:class:`~PIL.Image.Image` objects for use in a montage"""
- if filelist is None or len(filelist) < 1:
- return
-
- imglist = []
- for img in filelist:
- if not os.path.exists(img):
- print(f"unable to find {img}")
- continue
- try:
- with Image.open(img) as im:
- im = im.convert2byte()
- except Exception:
- if not isSpiderImage(img):
- print(img + " is not a Spider image file")
- continue
- im.info["filename"] = img
- imglist.append(im)
- return imglist
-
-
-# --------------------------------------------------------------------
-# For saving images in Spider format
-
-
-def makeSpiderHeader(im):
- nsam, nrow = im.size
- lenbyt = nsam * 4 # There are labrec records in the header
- labrec = int(1024 / lenbyt)
- if 1024 % lenbyt != 0:
- labrec += 1
- labbyt = labrec * lenbyt
- nvalues = int(labbyt / 4)
- if nvalues < 23:
- return []
-
- hdr = []
- for i in range(nvalues):
- hdr.append(0.0)
-
- # NB these are Fortran indices
- hdr[1] = 1.0 # nslice (=1 for an image)
- hdr[2] = float(nrow) # number of rows per slice
- hdr[3] = float(nrow) # number of records in the image
- hdr[5] = 1.0 # iform for 2D image
- hdr[12] = float(nsam) # number of pixels per line
- hdr[13] = float(labrec) # number of records in file header
- hdr[22] = float(labbyt) # total number of bytes in header
- hdr[23] = float(lenbyt) # record length in bytes
-
- # adjust for Fortran indexing
- hdr = hdr[1:]
- hdr.append(0.0)
- # pack binary data into a string
- return [struct.pack("f", v) for v in hdr]
-
-
-def _save(im, fp, filename):
- if im.mode[0] != "F":
- im = im.convert("F")
-
- hdr = makeSpiderHeader(im)
- if len(hdr) < 256:
- msg = "Error creating Spider header"
- raise OSError(msg)
-
- # write the SPIDER header
- fp.writelines(hdr)
-
- rawmode = "F;32NF" # 32-bit native floating point
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, 1))])
-
-
-def _save_spider(im, fp, filename):
- # get the filename extension and register it with Image
- ext = os.path.splitext(filename)[1]
- Image.register_extension(SpiderImageFile.format, ext)
- _save(im, fp, filename)
-
-
-# --------------------------------------------------------------------
-
-
-Image.register_open(SpiderImageFile.format, SpiderImageFile)
-Image.register_save(SpiderImageFile.format, _save_spider)
-
-if __name__ == "__main__":
- if len(sys.argv) < 2:
- print("Syntax: python3 SpiderImagePlugin.py [infile] [outfile]")
- sys.exit()
-
- filename = sys.argv[1]
- if not isSpiderImage(filename):
- print("input image must be in Spider format")
- sys.exit()
-
- with Image.open(filename) as im:
- print("image: " + str(im))
- print("format: " + str(im.format))
- print("size: " + str(im.size))
- print("mode: " + str(im.mode))
- print("max, min: ", end=" ")
- print(im.getextrema())
-
- if len(sys.argv) > 2:
- outfile = sys.argv[2]
-
- # perform some image operation
- im = im.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
- print(
- f"saving a flipped version of {os.path.basename(filename)} "
- f"as {outfile} "
- )
- im.save(outfile, SpiderImageFile.format)
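
As a quick reference for the plugin above, here is a minimal usage sketch. The path `example.spi` is a placeholder, not a file from this repository; the calls (`n_frames`, `convert2byte`) correspond to the methods defined in the deleted module.

```python
from PIL import Image
import PIL.SpiderImagePlugin  # noqa: F401  (importing registers the SPIDER open/save handlers)

# "example.spi" is a placeholder for any valid SPIDER 2D image or stack.
with Image.open("example.spi") as im:
    print(im.format, im.size, im.n_frames)  # e.g. SPIDER, (width, height), number of frames
    byte_im = im.convert2byte()             # rescale 32-bit float pixels to an 8-bit "L" image
    byte_im.save("example.png")
```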
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/contourpy/util/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/contourpy/util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css
deleted file mode 100644
index 3968f6c1a3f8d685db4eabfbc029d0a99dbb5626..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css
+++ /dev/null
@@ -1 +0,0 @@
-@font-face{font-family:KaTeX_AMS;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-0cdd387c.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-30da91e8.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-68534840.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-de7701e4.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-1ae6bd74.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-07d8e303.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-5d53e70a.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-3398dd02.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-ed0b7437.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-74444efd.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-9be7ceb8.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-9163df9c.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-51814d27.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-5e28753b.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-1e6f9579.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-0f60d1b8.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-c76c5d69.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-138ac28d.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-99cd42a3.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-a6f7ec0d.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-70ee1f64.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-97479ca6.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-f1d6ef86.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-0d85ae7c.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-c2342cd8.woff2) 
format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-c6368d87.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-d0332f52.ttf) format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-dc47344d.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-850c0af5.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-f9377ab0.ttf) format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-7af58c5e.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-8a8d2445.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-08ce98e5.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-e99ae511.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-ece03cfd.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-1ece03f7.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-00b26ac8.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-91ee6750.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-3931dd81.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-68e8c73e.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-11e4dc8a.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-f36ea897.ttf) format("truetype")}@font-face{font-family:KaTeX_Script;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-036d4e95.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-d96cdf2b.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-1c67f068.ttf) format("truetype")}@font-face{font-family:KaTeX_Size1;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-6b47c401.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-c943cc98.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-95b6d2f1.ttf) format("truetype")}@font-face{font-family:KaTeX_Size2;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-d04c5421.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-2014c523.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-a6b2099f.ttf) 
format("truetype")}@font-face{font-family:KaTeX_Size3;font-style:normal;font-weight:400;src:url(data:font/woff2;base64,d09GMgABAAAAAA4oAA4AAAAAHbQAAA3TAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAABmAAgRQIDgmcDBEICo1oijYBNgIkA14LMgAEIAWJAAeBHAyBHBvbGiMRdnO0IkRRkiYDgr9KsJ1NUAf2kILNxgUmgqIgq1P89vcbIcmsQbRps3vCcXdYOKSWEPEKgZgQkprQQsxIXUgq0DqpGKmIvrgkeVGtEQD9DzAO29fM9jYhxZEsL2FeURH2JN4MIcTdO049NCVdxQ/w9NrSYFEBKTDKpLKfNkCGDc1RwjZLQcm3vqJ2UW9Xfa3tgAHz6ivp6vgC2yD4/6352ndnN0X0TL7seypkjZlMsjmZnf0Mm5Q+JykRWQBKCVCVPbARPXWyQtb5VgLB6Biq7/Uixcj2WGqdI8tGSgkuRG+t910GKP2D7AQH0DB9FMDW/obJZ8giFI3Wg8Cvevz0M+5m0rTh7XDBlvo9Y4vm13EXmfttwI4mBo1EG15fxJhUiCLbiiyCf/ZA6MFAhg3pGIZGdGIVjtPn6UcMk9A/UUr9PhoNsCENw1APAq0gpH73e+M+0ueyHbabc3vkbcdtzcf/fiy+NxQEjf9ud/ELBHAXJ0nk4z+MXH2Ev/kWyV4k7SkvpPc9Qr38F6RPWnM9cN6DJ0AdD1BhtgABtmoRoFCvPsBAumNm6soZG2Gk5GyVTo2sJncSyp0jQTYoR6WDvTwaaEcHsxHfvuWhHA3a6bN7twRKtcGok6NsCi7jYRrM2jExsUFMxMQYuJbMhuWNOumEJy9hi29Dmg5zMp/A5+hhPG19j1vBrq8JTLr8ki5VLPmG/PynJHVul440bxg5xuymHUFPBshC+nA9I1FmwbRBTNHAcik3Oae0cxKoI3MOriM42UrPe51nsaGxJ+WfXubAsP84aabUlQSJ1IiE0iPETLUU4CATgfXSCSpuRFRmCGbO+wSpAnzaeaCYW1VNEysRtuXCEL1kUFUbbtMv3Tilt/1c11jt3Q5bbMa84cpWipp8Elw3MZhOHsOlwwVUQM3lAR35JiFQbaYCRnMF2lxAWoOg2gyoIV4PouX8HytNIfLhqpJtXB4vjiViUI8IJ7bkC4ikkQvKksnOTKICwnqWSZ9YS5f0WCxmpgjbIq7EJcM4aI2nmhLNY2JIUgOjXZFWBHb+x5oh6cwb0Tv1ackHdKi0I9OO2wE9aogIOn540CCCziyhN+IaejtgAONKznHlHyutPrHGwCx9S6B8kfS4Mfi4Eyv7OU730bT1SCBjt834cXsf43zVjPUqqJjgrjeGnBxSG4aYAKFuVbeCfkDIjAqMb6yLNIbCuvXhMH2/+k2vkNpkORhR59N1CkzoOENvneIosjYmuTxlhUzaGEJQ/iWqx4dmwpmKjrwTiTGTCVozNAYqk/zXOndWxuWSmJkQpJw3pK5KX6QrLt5LATMqpmPAQhkhK6PUjzHUn7E0gHE0kPE0iKkolgkUx9SZmVAdDgpffdyJKg3k7VmzYGCwVXGz/tXmkOIp+vcWs+EMuhhvN0h9uhfzWJziBQmCREGSIFmQIkgVpAnSBRmC//6hkLZwaVhwxlrJSOdqlFtOYxlau9F2QN5Y98xmIAsiM1HVp2VFX+DHHGg6Ecjh3vmqtidX3qHI2qycTk/iwxSt5UzTmEP92ZBnEWTk4Mx8Mpl78ZDokxg/KWb+Q0QkvdKVmq3TMW+RXEgrsziSAfNXFMhDc60N5N9jQzjfO0kBKpUZl0ZmwJ41j/B9Hz6wmRaJB84niNmQrzp9eSlQCDDzazGDdVi3P36VZQ+Jy4f9UBNp+3zTjqI4abaFAm+GShVaXlsGdF3FYzZcDI6cori4kMxUECl9IjJZpzkvitAoxKue+90pDMvcKRxLl53TmOKCmV/xRolNKSqqUxc6LStOETmFOiLZZptlZepcKiAzteG8PEdpnQpbOMNcMsR4RR2Bs0cKFEvSmIjAFcnarqwUL4lDhHmnVkwu1IwshbiCcgvOheZuYyOteufZZwlcTlLgnZ3o/WcYdzZHW/WGaqaVfmTZ1aWCceJjkbZqsfbkOtcFlUZM/jy+hXHDbaUobWqqXaeWobbLO99yG5N3U4wxco0rQGGcOLASFMXeJoham8M+/x6O2WywK2l4HGbq1CoUyC/IZikQhdq3SiuNrvAEj0AVu9x2x3lp/xWzahaxidezFVtdcb5uEnzyl0ZmYiuKI0exvCd4Xc9CV1KB0db00z92wDPde0kukbvZIWN6jUWFTmPIC/Y4UPCm8UfDTFZpZNon1qLFTkBhxzB+FjQRA2Q/YRJT8pQigslMaUpFyAG8TMlXigiqmAZX4xgijKjRlGpLE0GdplRfCaJo0JQaSxNBk6ZmMzcya0FmrcisDdn0Q3HI2sWSppYigmlM1XT/kLQZSNpMJG0WkjYbSZuDpM1F0uYhFc1HxU4m1QJjDK6iL0S5uSj5rgXc3RejEigtcRBtqYPQsiTskmO5vosV+q4VGIKbOkDg0jtRrq+Em1YloaTFar3EGr1EUC8R0kus1Uus00usL97ABr2BjXoDm/QGNhuWtMVBKOwg/i78lT7hBsAvDmwHc/ao3vmUbBmhjeYySZNWvGkfZAgISDSaDo1SVpzGDsAEkF8B+gEapViUoZgUWXcRIGFZNm6gWbAKk0bp0k1MHG9fLYtV4iS2SmLEQFARzRcnf9PUS0LVn05/J9MiRRBU3v2IrvW974v4N00L7ZMk0wXP1409CHo/an8zTRHD3eSJ6m8D4YMkZNl3M79sqeuAsr/m3f+8/yl7A50aiAEJgeBeMWzu7ui9UfUBCe2TIqZIoOd/3/udRBOQidQZUERzb2/VwZN1H/Sju82ew2H2Wfr6qvfVf3hqwDvAIpkQVFy4B9Pe9e4/XvPeceu7h3dvO56iJPf0+A6cqA2ip18ER+iFgggiuOkvj24bby0N9j2UHIkgqIt+sVgfodC4YghLSMjSZbH0VR/6dMDrYJeKHilKTemt6v6kvzvn3/RrdWtr0GoN/xL+Sex/cPYLUpepx9cz/D46UPU5KXgAQa+NDps1v6J3xP1i2HtaDB0M9aX2deA7SYff//+gUCovMmIK/qfsFcOk+4Y5ZN97XlG6zebqtMbKgeRFi51vnxTQYBUik2rS/Cn6PC8ADR8FGxsRPB82dzfND90gIcshOcYUkfjherBz53odpm6TP8txlwOZ71xmfHHOvq053qFF/MRlS3jP0ELudrf2OeN8DHvp6ZceLe8qKYvWz/7yp0u4dKPfli3CYq0O13Ih71mylJ80tOi10On8wi+F4+LWgDPeJ30msSQt9/vkmHq9/Lvo2b461mP801v3W4xTcs6CbvF9UDdrSt+A8OUbpSh55qAUFXWznBBfdeJ8a4d7ugT5tvxUza3h9m4H7ptTqiG4z0g5dc0X29OcGlhpGFMpQo9y
tTS+NViZpNdvU4kWx+LKxNY10kQ1yqGXrhe4/1nvP7E+nd5A92TtaRplbHSqoIdOqtRWti+fkB5/n1+/VvCmz12pG1kpQWsfi1ftlBobm0bpngs16CHkbIwdLnParxtTV3QYRlfJ0KFskH7pdN/YDn+yRuSd7sNH3aO0DYPggk6uWuXrfOc+fa3VTxFVvKaNxHsiHmsXyCLIE5yuOeN3/Jdf8HBL/5M6shjyhxHx9BjB1O0+4NLOnjLLSxwO7ukN4jMbOIcD879KLSi6Pk61Oqm2377n8079PXEEQ7cy7OKEC9nbpet118fxweTafpt69x/Bt8UqGzNQt7aelpc44dn5cqhwf71+qKp/Zf/+a0zcizOUWpl/iBcSXip0pplkatCchoH5c5aUM8I7/dWxAej8WicPL1URFZ9BDJelUwEwTkGqUhgSlydVes95YdXvhh9Gfz/aeFWvgVb4tuLbcv4+wLdutVZv/cUonwBD/6eDlE0aSiKK/uoH3+J1wDE/jMVqY2ysGufN84oIXB0sPzy8ollX/LegY74DgJXJR57sn+VGza0x3DnuIgABFM15LmajjjsNlYj+JEZGbuRYcAMOWxFkPN2w6Wd46xo4gVWQR/X4lyI/R6K/YK0110GzudPRW7Y+UOBGTfNNzHeYT0fiH0taunBpq9HEW8OKSaBGj21L0MqenEmNRWBAWDWAk4CpNoEZJ2tTaPFgbQYj8HxtFilErs3BTRwT8uO1NXQaWfIotchmPkAF5mMBAliEmZiOGVgCG9LgRzpscMAOOwowlT3JhusdazXGSC/hxR3UlmWVwWHpOIKheqONvjyhSiTHIkVUco5bnji8m//zL7PKaT1Vl5I6UE609f+gkr6MZKVyKc7zJRmCahLsdlyA5fdQkRSan9LgnnLEyGSkaKJCJog0wAgvepWBt80+1yKln1bMVtCljfNWDueKLsWwaEbBSfSPTEmVRsUcYYMnEjcjeyCZzBXK9E9BYBXLKjOSpUDR+nEV3TFSUdQaz+ot98QxgXwx0GQ+EEUAKB2qZPkQQ0GqFD8UPFMqyaCHM24BZmSGic9EYMagKizOw9Hz50DMrDLrqqLkTAhplMictiCAx5S3BIUQdeJeLnBy2CNtMfz6cV4u8XKoFZQesbf9YZiIERiHjaNodDW6LgcirX/mPnJIkBGDUpTBhSa0EIr38D5hCIszhCM8URGBqImoWjpvpt1ebu/v3Gl3qJfMnNM+9V+kiRFyROTPHQWOcs1dNW94/ukKMPZBvDi55i5CttdeJz84DLngLqjcdwEZ87bFFR8CIG35OAkDVN6VRDZ7aq67NteYqZ2lpT8oYB2CytoBd6VuAx4WgiAsnuj3WohG+LugzXiQRDeM3XYXlULv4dp5VFYC) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size3-Regular-6ab6b62e.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size3-Regular-500e04d5.ttf) format("truetype")}@font-face{font-family:KaTeX_Size4;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-a4af7d41.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-99f9c675.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-c647367d.ttf) format("truetype")}@font-face{font-family:KaTeX_Typewriter;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-71d517d6.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-e14fed02.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-f01f3e87.ttf) format("truetype")}.gradio-container-3-33-1 .katex{text-rendering:auto;font: 1.21em KaTeX_Main,Times New Roman,serif;line-height:1.2;text-indent:0}.gradio-container-3-33-1 .katex *{-ms-high-contrast-adjust:none!important;border-color:currentColor}.gradio-container-3-33-1 .katex .katex-version:after{content:"0.16.7"}.gradio-container-3-33-1 .katex .katex-mathml{clip:rect(1px,1px,1px,1px);border:0;height:1px;overflow:hidden;padding:0;position:absolute;width:1px}.gradio-container-3-33-1 .katex .katex-html>.newline{display:block}.gradio-container-3-33-1 .katex .base{position:relative;white-space:nowrap;width:-webkit-min-content;width:-moz-min-content;width:min-content}.gradio-container-3-33-1 .katex .base,.gradio-container-3-33-1 .katex .strut{display:inline-block}.gradio-container-3-33-1 .katex .textbf{font-weight:700}.gradio-container-3-33-1 .katex .textit{font-style:italic}.gradio-container-3-33-1 .katex .textrm{font-family:KaTeX_Main}.gradio-container-3-33-1 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-33-1 .katex 
.texttt{font-family:KaTeX_Typewriter}.gradio-container-3-33-1 .katex .mathnormal{font-family:KaTeX_Math;font-style:italic}.gradio-container-3-33-1 .katex .mathit{font-family:KaTeX_Main;font-style:italic}.gradio-container-3-33-1 .katex .mathrm{font-style:normal}.gradio-container-3-33-1 .katex .mathbf{font-family:KaTeX_Main;font-weight:700}.gradio-container-3-33-1 .katex .boldsymbol{font-family:KaTeX_Math;font-style:italic;font-weight:700}.gradio-container-3-33-1 .katex .amsrm,.gradio-container-3-33-1 .katex .mathbb,.gradio-container-3-33-1 .katex .textbb{font-family:KaTeX_AMS}.gradio-container-3-33-1 .katex .mathcal{font-family:KaTeX_Caligraphic}.gradio-container-3-33-1 .katex .mathfrak,.gradio-container-3-33-1 .katex .textfrak{font-family:KaTeX_Fraktur}.gradio-container-3-33-1 .katex .mathtt{font-family:KaTeX_Typewriter}.gradio-container-3-33-1 .katex .mathscr,.gradio-container-3-33-1 .katex .textscr{font-family:KaTeX_Script}.gradio-container-3-33-1 .katex .mathsf,.gradio-container-3-33-1 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-33-1 .katex .mathboldsf,.gradio-container-3-33-1 .katex .textboldsf{font-family:KaTeX_SansSerif;font-weight:700}.gradio-container-3-33-1 .katex .mathitsf,.gradio-container-3-33-1 .katex .textitsf{font-family:KaTeX_SansSerif;font-style:italic}.gradio-container-3-33-1 .katex .mainrm{font-family:KaTeX_Main;font-style:normal}.gradio-container-3-33-1 .katex .vlist-t{border-collapse:collapse;display:inline-table;table-layout:fixed}.gradio-container-3-33-1 .katex .vlist-r{display:table-row}.gradio-container-3-33-1 .katex .vlist{display:table-cell;position:relative;vertical-align:bottom}.gradio-container-3-33-1 .katex .vlist>span{display:block;height:0;position:relative}.gradio-container-3-33-1 .katex .vlist>span>span{display:inline-block}.gradio-container-3-33-1 .katex .vlist>span>.pstrut{overflow:hidden;width:0}.gradio-container-3-33-1 .katex .vlist-t2{margin-right:-2px}.gradio-container-3-33-1 .katex .vlist-s{display:table-cell;font-size:1px;min-width:2px;vertical-align:bottom;width:2px}.gradio-container-3-33-1 .katex .vbox{align-items:baseline;display:inline-flex;flex-direction:column}.gradio-container-3-33-1 .katex .hbox{width:100%}.gradio-container-3-33-1 .katex .hbox,.gradio-container-3-33-1 .katex .thinbox{display:inline-flex;flex-direction:row}.gradio-container-3-33-1 .katex .thinbox{max-width:0;width:0}.gradio-container-3-33-1 .katex .msupsub{text-align:left}.gradio-container-3-33-1 .katex .mfrac>span>span{text-align:center}.gradio-container-3-33-1 .katex .mfrac .frac-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .hdashline,.gradio-container-3-33-1 .katex .hline,.gradio-container-3-33-1 .katex .mfrac .frac-line,.gradio-container-3-33-1 .katex .overline .overline-line,.gradio-container-3-33-1 .katex .rule,.gradio-container-3-33-1 .katex .underline .underline-line{min-height:1px}.gradio-container-3-33-1 .katex .mspace{display:inline-block}.gradio-container-3-33-1 .katex .clap,.gradio-container-3-33-1 .katex .llap,.gradio-container-3-33-1 .katex .rlap{position:relative;width:0}.gradio-container-3-33-1 .katex .clap>.inner,.gradio-container-3-33-1 .katex .llap>.inner,.gradio-container-3-33-1 .katex .rlap>.inner{position:absolute}.gradio-container-3-33-1 .katex .clap>.fix,.gradio-container-3-33-1 .katex .llap>.fix,.gradio-container-3-33-1 .katex .rlap>.fix{display:inline-block}.gradio-container-3-33-1 .katex .llap>.inner{right:0}.gradio-container-3-33-1 .katex .clap>.inner,.gradio-container-3-33-1 
.katex .rlap>.inner{left:0}.gradio-container-3-33-1 .katex .clap>.inner>span{margin-left:-50%;margin-right:50%}.gradio-container-3-33-1 .katex .rule{border:0 solid;display:inline-block;position:relative}.gradio-container-3-33-1 .katex .hline,.gradio-container-3-33-1 .katex .overline .overline-line,.gradio-container-3-33-1 .katex .underline .underline-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .hdashline{border-bottom-style:dashed;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .sqrt>.root{margin-left:.27777778em;margin-right:-.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size1,.gradio-container-3-33-1 .katex .sizing.reset-size1.size1{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size2,.gradio-container-3-33-1 .katex .sizing.reset-size1.size2{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size3,.gradio-container-3-33-1 .katex .sizing.reset-size1.size3{font-size:1.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size4,.gradio-container-3-33-1 .katex .sizing.reset-size1.size4{font-size:1.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size5,.gradio-container-3-33-1 .katex .sizing.reset-size1.size5{font-size:1.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size6,.gradio-container-3-33-1 .katex .sizing.reset-size1.size6{font-size:2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size7,.gradio-container-3-33-1 .katex .sizing.reset-size1.size7{font-size:2.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size8,.gradio-container-3-33-1 .katex .sizing.reset-size1.size8{font-size:2.88em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size9,.gradio-container-3-33-1 .katex .sizing.reset-size1.size9{font-size:3.456em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size10,.gradio-container-3-33-1 .katex .sizing.reset-size1.size10{font-size:4.148em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size11,.gradio-container-3-33-1 .katex .sizing.reset-size1.size11{font-size:4.976em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size1,.gradio-container-3-33-1 .katex .sizing.reset-size2.size1{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size2,.gradio-container-3-33-1 .katex .sizing.reset-size2.size2{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size3,.gradio-container-3-33-1 .katex .sizing.reset-size2.size3{font-size:1.16666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size4,.gradio-container-3-33-1 .katex .sizing.reset-size2.size4{font-size:1.33333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size5,.gradio-container-3-33-1 .katex .sizing.reset-size2.size5{font-size:1.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size6,.gradio-container-3-33-1 .katex .sizing.reset-size2.size6{font-size:1.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size7,.gradio-container-3-33-1 .katex .sizing.reset-size2.size7{font-size:2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size8,.gradio-container-3-33-1 .katex .sizing.reset-size2.size8{font-size:2.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size9,.gradio-container-3-33-1 .katex .sizing.reset-size2.size9{font-size:2.88em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size2.size10,.gradio-container-3-33-1 .katex .sizing.reset-size2.size10{font-size:3.45666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size11,.gradio-container-3-33-1 .katex .sizing.reset-size2.size11{font-size:4.14666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size1,.gradio-container-3-33-1 .katex .sizing.reset-size3.size1{font-size:.71428571em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size2,.gradio-container-3-33-1 .katex .sizing.reset-size3.size2{font-size:.85714286em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size3,.gradio-container-3-33-1 .katex .sizing.reset-size3.size3{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size4,.gradio-container-3-33-1 .katex .sizing.reset-size3.size4{font-size:1.14285714em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size5,.gradio-container-3-33-1 .katex .sizing.reset-size3.size5{font-size:1.28571429em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size6,.gradio-container-3-33-1 .katex .sizing.reset-size3.size6{font-size:1.42857143em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size7,.gradio-container-3-33-1 .katex .sizing.reset-size3.size7{font-size:1.71428571em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size8,.gradio-container-3-33-1 .katex .sizing.reset-size3.size8{font-size:2.05714286em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size9,.gradio-container-3-33-1 .katex .sizing.reset-size3.size9{font-size:2.46857143em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size10,.gradio-container-3-33-1 .katex .sizing.reset-size3.size10{font-size:2.96285714em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size11,.gradio-container-3-33-1 .katex .sizing.reset-size3.size11{font-size:3.55428571em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size1,.gradio-container-3-33-1 .katex .sizing.reset-size4.size1{font-size:.625em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size2,.gradio-container-3-33-1 .katex .sizing.reset-size4.size2{font-size:.75em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size3,.gradio-container-3-33-1 .katex .sizing.reset-size4.size3{font-size:.875em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size4,.gradio-container-3-33-1 .katex .sizing.reset-size4.size4{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size5,.gradio-container-3-33-1 .katex .sizing.reset-size4.size5{font-size:1.125em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size6,.gradio-container-3-33-1 .katex .sizing.reset-size4.size6{font-size:1.25em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size7,.gradio-container-3-33-1 .katex .sizing.reset-size4.size7{font-size:1.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size8,.gradio-container-3-33-1 .katex .sizing.reset-size4.size8{font-size:1.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size9,.gradio-container-3-33-1 .katex .sizing.reset-size4.size9{font-size:2.16em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size10,.gradio-container-3-33-1 .katex .sizing.reset-size4.size10{font-size:2.5925em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size11,.gradio-container-3-33-1 .katex .sizing.reset-size4.size11{font-size:3.11em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size5.size1,.gradio-container-3-33-1 .katex .sizing.reset-size5.size1{font-size:.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size2,.gradio-container-3-33-1 .katex .sizing.reset-size5.size2{font-size:.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size3,.gradio-container-3-33-1 .katex .sizing.reset-size5.size3{font-size:.77777778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size4,.gradio-container-3-33-1 .katex .sizing.reset-size5.size4{font-size:.88888889em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size5,.gradio-container-3-33-1 .katex .sizing.reset-size5.size5{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size6,.gradio-container-3-33-1 .katex .sizing.reset-size5.size6{font-size:1.11111111em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size7,.gradio-container-3-33-1 .katex .sizing.reset-size5.size7{font-size:1.33333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size8,.gradio-container-3-33-1 .katex .sizing.reset-size5.size8{font-size:1.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size9,.gradio-container-3-33-1 .katex .sizing.reset-size5.size9{font-size:1.92em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size10,.gradio-container-3-33-1 .katex .sizing.reset-size5.size10{font-size:2.30444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size11,.gradio-container-3-33-1 .katex .sizing.reset-size5.size11{font-size:2.76444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size1,.gradio-container-3-33-1 .katex .sizing.reset-size6.size1{font-size:.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size2,.gradio-container-3-33-1 .katex .sizing.reset-size6.size2{font-size:.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size3,.gradio-container-3-33-1 .katex .sizing.reset-size6.size3{font-size:.7em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size4,.gradio-container-3-33-1 .katex .sizing.reset-size6.size4{font-size:.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size5,.gradio-container-3-33-1 .katex .sizing.reset-size6.size5{font-size:.9em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size6,.gradio-container-3-33-1 .katex .sizing.reset-size6.size6{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size7,.gradio-container-3-33-1 .katex .sizing.reset-size6.size7{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size8,.gradio-container-3-33-1 .katex .sizing.reset-size6.size8{font-size:1.44em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size9,.gradio-container-3-33-1 .katex .sizing.reset-size6.size9{font-size:1.728em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size10,.gradio-container-3-33-1 .katex .sizing.reset-size6.size10{font-size:2.074em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size11,.gradio-container-3-33-1 .katex .sizing.reset-size6.size11{font-size:2.488em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size1,.gradio-container-3-33-1 .katex .sizing.reset-size7.size1{font-size:.41666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size2,.gradio-container-3-33-1 .katex .sizing.reset-size7.size2{font-size:.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size3,.gradio-container-3-33-1 .katex 
.sizing.reset-size7.size3{font-size:.58333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size4,.gradio-container-3-33-1 .katex .sizing.reset-size7.size4{font-size:.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size5,.gradio-container-3-33-1 .katex .sizing.reset-size7.size5{font-size:.75em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size6,.gradio-container-3-33-1 .katex .sizing.reset-size7.size6{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size7,.gradio-container-3-33-1 .katex .sizing.reset-size7.size7{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size8,.gradio-container-3-33-1 .katex .sizing.reset-size7.size8{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size9,.gradio-container-3-33-1 .katex .sizing.reset-size7.size9{font-size:1.44em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size10,.gradio-container-3-33-1 .katex .sizing.reset-size7.size10{font-size:1.72833333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size11,.gradio-container-3-33-1 .katex .sizing.reset-size7.size11{font-size:2.07333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size1,.gradio-container-3-33-1 .katex .sizing.reset-size8.size1{font-size:.34722222em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size2,.gradio-container-3-33-1 .katex .sizing.reset-size8.size2{font-size:.41666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size3,.gradio-container-3-33-1 .katex .sizing.reset-size8.size3{font-size:.48611111em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size4,.gradio-container-3-33-1 .katex .sizing.reset-size8.size4{font-size:.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size5,.gradio-container-3-33-1 .katex .sizing.reset-size8.size5{font-size:.625em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size6,.gradio-container-3-33-1 .katex .sizing.reset-size8.size6{font-size:.69444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size7,.gradio-container-3-33-1 .katex .sizing.reset-size8.size7{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size8,.gradio-container-3-33-1 .katex .sizing.reset-size8.size8{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size9,.gradio-container-3-33-1 .katex .sizing.reset-size8.size9{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size10,.gradio-container-3-33-1 .katex .sizing.reset-size8.size10{font-size:1.44027778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size11,.gradio-container-3-33-1 .katex .sizing.reset-size8.size11{font-size:1.72777778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size1,.gradio-container-3-33-1 .katex .sizing.reset-size9.size1{font-size:.28935185em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size2,.gradio-container-3-33-1 .katex .sizing.reset-size9.size2{font-size:.34722222em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size3,.gradio-container-3-33-1 .katex .sizing.reset-size9.size3{font-size:.40509259em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size4,.gradio-container-3-33-1 .katex .sizing.reset-size9.size4{font-size:.46296296em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size5,.gradio-container-3-33-1 .katex 
.sizing.reset-size9.size5{font-size:.52083333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size6,.gradio-container-3-33-1 .katex .sizing.reset-size9.size6{font-size:.5787037em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size7,.gradio-container-3-33-1 .katex .sizing.reset-size9.size7{font-size:.69444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size8,.gradio-container-3-33-1 .katex .sizing.reset-size9.size8{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size9,.gradio-container-3-33-1 .katex .sizing.reset-size9.size9{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size10,.gradio-container-3-33-1 .katex .sizing.reset-size9.size10{font-size:1.20023148em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size11,.gradio-container-3-33-1 .katex .sizing.reset-size9.size11{font-size:1.43981481em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size1,.gradio-container-3-33-1 .katex .sizing.reset-size10.size1{font-size:.24108004em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size2,.gradio-container-3-33-1 .katex .sizing.reset-size10.size2{font-size:.28929605em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size3,.gradio-container-3-33-1 .katex .sizing.reset-size10.size3{font-size:.33751205em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size4,.gradio-container-3-33-1 .katex .sizing.reset-size10.size4{font-size:.38572806em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size5,.gradio-container-3-33-1 .katex .sizing.reset-size10.size5{font-size:.43394407em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size6,.gradio-container-3-33-1 .katex .sizing.reset-size10.size6{font-size:.48216008em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size7,.gradio-container-3-33-1 .katex .sizing.reset-size10.size7{font-size:.57859209em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size8,.gradio-container-3-33-1 .katex .sizing.reset-size10.size8{font-size:.69431051em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size9,.gradio-container-3-33-1 .katex .sizing.reset-size10.size9{font-size:.83317261em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size10,.gradio-container-3-33-1 .katex .sizing.reset-size10.size10{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size11,.gradio-container-3-33-1 .katex .sizing.reset-size10.size11{font-size:1.19961427em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size1,.gradio-container-3-33-1 .katex .sizing.reset-size11.size1{font-size:.20096463em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size2,.gradio-container-3-33-1 .katex .sizing.reset-size11.size2{font-size:.24115756em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size3,.gradio-container-3-33-1 .katex .sizing.reset-size11.size3{font-size:.28135048em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size4,.gradio-container-3-33-1 .katex .sizing.reset-size11.size4{font-size:.32154341em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size5,.gradio-container-3-33-1 .katex .sizing.reset-size11.size5{font-size:.36173633em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size6,.gradio-container-3-33-1 .katex .sizing.reset-size11.size6{font-size:.40192926em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size11.size7,.gradio-container-3-33-1 .katex .sizing.reset-size11.size7{font-size:.48231511em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size8,.gradio-container-3-33-1 .katex .sizing.reset-size11.size8{font-size:.57877814em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size9,.gradio-container-3-33-1 .katex .sizing.reset-size11.size9{font-size:.69453376em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size10,.gradio-container-3-33-1 .katex .sizing.reset-size11.size10{font-size:.83360129em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size11,.gradio-container-3-33-1 .katex .sizing.reset-size11.size11{font-size:1em}.gradio-container-3-33-1 .katex .delimsizing.size1{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .delimsizing.size2{font-family:KaTeX_Size2}.gradio-container-3-33-1 .katex .delimsizing.size3{font-family:KaTeX_Size3}.gradio-container-3-33-1 .katex .delimsizing.size4{font-family:KaTeX_Size4}.gradio-container-3-33-1 .katex .delimsizing.mult .delim-size1>span{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .delimsizing.mult .delim-size4>span{font-family:KaTeX_Size4}.gradio-container-3-33-1 .katex .nulldelimiter{display:inline-block;width:.12em}.gradio-container-3-33-1 .katex .delimcenter,.gradio-container-3-33-1 .katex .op-symbol{position:relative}.gradio-container-3-33-1 .katex .op-symbol.small-op{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .op-symbol.large-op{font-family:KaTeX_Size2}.gradio-container-3-33-1 .katex .accent>.vlist-t,.gradio-container-3-33-1 .katex .op-limits>.vlist-t{text-align:center}.gradio-container-3-33-1 .katex .accent .accent-body{position:relative}.gradio-container-3-33-1 .katex .accent .accent-body:not(.accent-full){width:0}.gradio-container-3-33-1 .katex .overlay{display:block}.gradio-container-3-33-1 .katex .mtable .vertical-separator{display:inline-block;min-width:1px}.gradio-container-3-33-1 .katex .mtable .arraycolsep{display:inline-block}.gradio-container-3-33-1 .katex .mtable .col-align-c>.vlist-t{text-align:center}.gradio-container-3-33-1 .katex .mtable .col-align-l>.vlist-t{text-align:left}.gradio-container-3-33-1 .katex .mtable .col-align-r>.vlist-t{text-align:right}.gradio-container-3-33-1 .katex .svg-align{text-align:left}.gradio-container-3-33-1 .katex svg{fill:currentColor;stroke:currentColor;fill-rule:nonzero;fill-opacity:1;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;display:block;height:inherit;position:absolute;width:100%}.gradio-container-3-33-1 .katex svg path{stroke:none}.gradio-container-3-33-1 .katex img{border-style:none;max-height:none;max-width:none;min-height:0;min-width:0}.gradio-container-3-33-1 .katex .stretchy{display:block;overflow:hidden;position:relative;width:100%}.gradio-container-3-33-1 .katex .stretchy:after,.gradio-container-3-33-1 .katex .stretchy:before{content:""}.gradio-container-3-33-1 .katex .hide-tail{overflow:hidden;position:relative;width:100%}.gradio-container-3-33-1 .katex .halfarrow-left{left:0;overflow:hidden;position:absolute;width:50.2%}.gradio-container-3-33-1 .katex .halfarrow-right{overflow:hidden;position:absolute;right:0;width:50.2%}.gradio-container-3-33-1 .katex .brace-left{left:0;overflow:hidden;position:absolute;width:25.1%}.gradio-container-3-33-1 .katex .brace-center{left:25%;overflow:hidden;position:absolute;width:50%}.gradio-container-3-33-1 .katex 
.brace-right{overflow:hidden;position:absolute;right:0;width:25.1%}.gradio-container-3-33-1 .katex .x-arrow-pad{padding:0 .5em}.gradio-container-3-33-1 .katex .cd-arrow-pad{padding:0 .55556em 0 .27778em}.gradio-container-3-33-1 .katex .mover,.gradio-container-3-33-1 .katex .munder,.gradio-container-3-33-1 .katex .x-arrow{text-align:center}.gradio-container-3-33-1 .katex .boxpad{padding:0 .3em}.gradio-container-3-33-1 .katex .fbox,.gradio-container-3-33-1 .katex .fcolorbox{border:.04em solid;box-sizing:border-box}.gradio-container-3-33-1 .katex .cancel-pad{padding:0 .2em}.gradio-container-3-33-1 .katex .cancel-lap{margin-left:-.2em;margin-right:-.2em}.gradio-container-3-33-1 .katex .sout{border-bottom-style:solid;border-bottom-width:.08em}.gradio-container-3-33-1 .katex .angl{border-right:.049em solid;border-top:.049em solid;box-sizing:border-box;margin-right:.03889em}.gradio-container-3-33-1 .katex .anglpad{padding:0 .03889em}.gradio-container-3-33-1 .katex .eqn-num:before{content:"(" counter(katexEqnNo) ")";counter-increment:katexEqnNo}.gradio-container-3-33-1 .katex .mml-eqn-num:before{content:"(" counter(mmlEqnNo) ")";counter-increment:mmlEqnNo}.gradio-container-3-33-1 .katex .mtr-glue{width:50%}.gradio-container-3-33-1 .katex .cd-vert-arrow{display:inline-block;position:relative}.gradio-container-3-33-1 .katex .cd-label-left{display:inline-block;position:absolute;right:calc(50% + .3em);text-align:left}.gradio-container-3-33-1 .katex .cd-label-right{display:inline-block;left:calc(50% + .3em);position:absolute;text-align:right}.gradio-container-3-33-1 .katex-display{display:block;margin:1em 0;text-align:center}.gradio-container-3-33-1 .katex-display>.katex{display:block;text-align:center;white-space:nowrap}.gradio-container-3-33-1 .katex-display>.katex>.katex-html{display:block;position:relative}.gradio-container-3-33-1 .katex-display>.katex>.katex-html>.tag{position:absolute;right:0}.gradio-container-3-33-1 .katex-display.leqno>.katex>.katex-html>.tag{left:0;right:auto}.gradio-container-3-33-1 .katex-display.fleqn>.katex{padding-left:2em;text-align:left}.gradio-container-3-33-1 body{counter-reset:katexEqnNo mmlEqnNo}.wrap.svelte-17nzccn.svelte-17nzccn{padding:var(--block-padding);height:100%;max-height:480px;overflow-y:auto}.message-wrap.svelte-17nzccn.svelte-17nzccn{display:flex;flex-direction:column;gap:var(--spacing-xxl)}.message-wrap.svelte-17nzccn>div.svelte-17nzccn img{border-radius:13px;max-width:30vw}.message-wrap.svelte-17nzccn audio{width:100%}.message.svelte-17nzccn.svelte-17nzccn{position:relative;align-self:flex-start;border-width:1px;border-radius:var(--radius-xxl);background:var(--background-fill-secondary);padding:var(--spacing-xxl);width:calc(100% - var(--spacing-xxl));color:var(--body-text-color);font-size:var(--text-lg);line-height:var(--line-lg);overflow-wrap:break-word}.user.svelte-17nzccn.svelte-17nzccn{align-self:flex-end;border-bottom-right-radius:0}.bot.svelte-17nzccn.svelte-17nzccn{border-bottom-left-radius:0;padding-left:calc(2 * var(--spacing-xxl))}@media (max-width: 
480px){.message.svelte-17nzccn.svelte-17nzccn{width:auto}.bot.svelte-17nzccn.svelte-17nzccn{padding-left:var(--spacing-xxl)}}.bot.svelte-17nzccn.svelte-17nzccn,.pending.svelte-17nzccn.svelte-17nzccn{border-color:var(--border-color-primary);background:var(--background-fill-secondary)}.user.svelte-17nzccn.svelte-17nzccn{border-color:var(--border-color-accent);background-color:var(--color-accent-soft)}.feedback.svelte-17nzccn.svelte-17nzccn{display:flex;position:absolute;top:var(--spacing-xl);right:calc(var(--spacing-xxl) + var(--spacing-xl));gap:var(--spacing-lg);font-size:var(--text-sm)}.feedback.svelte-17nzccn button.svelte-17nzccn{color:var(--body-text-color-subdued)}.feedback.svelte-17nzccn button.svelte-17nzccn:hover{color:var(--body-text-color)}.selectable.svelte-17nzccn.svelte-17nzccn{cursor:pointer}.pending.svelte-17nzccn.svelte-17nzccn{display:flex;justify-content:center;align-items:center;align-self:center;gap:2px}.dot-flashing.svelte-17nzccn.svelte-17nzccn{animation:svelte-17nzccn-dot-flashing 1s infinite linear alternate;border-radius:5px;background-color:var(--body-text-color);width:5px;height:5px;color:var(--body-text-color)}.dot-flashing.svelte-17nzccn.svelte-17nzccn:nth-child(2){animation-delay:.33s}.dot-flashing.svelte-17nzccn.svelte-17nzccn:nth-child(3){animation-delay:.66s}@media (max-width: 480px){.user.svelte-17nzccn.svelte-17nzccn{align-self:flex-end}.bot.svelte-17nzccn.svelte-17nzccn{align-self:flex-start;padding-left:var(--size-3)}}@keyframes svelte-17nzccn-dot-flashing{0%{opacity:.8}50%{opacity:.5}to{opacity:.8}}.message-wrap.svelte-17nzccn .message.svelte-17nzccn img{margin:var(--size-2);max-height:200px}.message-wrap.svelte-17nzccn .message.svelte-17nzccn a{color:var(--color-text-link);text-decoration:underline}.hide.svelte-17nzccn.svelte-17nzccn{display:none}.message-wrap.svelte-17nzccn pre[class*=language-],.message-wrap.svelte-17nzccn pre{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);box-shadow:none;border:none;border-radius:var(--radius-md);background-color:var(--chatbot-code-background-color);padding:var(--spacing-xl) 10px}.message-wrap.svelte-17nzccn table,.message-wrap.svelte-17nzccn tr,.message-wrap.svelte-17nzccn td,.message-wrap.svelte-17nzccn th{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);padding:var(--spacing-xl)}.message-wrap.svelte-17nzccn .bot.svelte-17nzccn table,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn tr,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn td,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn th{border:1px solid var(--border-color-primary)}.message-wrap.svelte-17nzccn .user.svelte-17nzccn table,.message-wrap.svelte-17nzccn .user.svelte-17nzccn tr,.message-wrap.svelte-17nzccn .user.svelte-17nzccn td,.message-wrap.svelte-17nzccn .user.svelte-17nzccn th{border:1px solid var(--border-color-accent)}.message-wrap.svelte-17nzccn ol,.message-wrap.svelte-17nzccn ul{padding-inline-start:2em}.message-wrap.svelte-17nzccn span.katex{font-size:var(--text-lg)}.message-wrap.svelte-17nzccn code>button{position:absolute;top:var(--spacing-md);right:var(--spacing-md);z-index:1;cursor:pointer;border-bottom-left-radius:var(--radius-sm);padding:5px;padding:var(--spacing-md);width:22px;height:22px}.message-wrap.svelte-17nzccn code>button>span{position:absolute;top:var(--spacing-md);right:var(--spacing-md);width:12px;height:12px}.message-wrap.svelte-17nzccn .check{position:absolute;top:0;right:0;opacity:0;z-index:var(--layer-top);transition:opacity 
.2s;background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}.message-wrap.svelte-17nzccn pre{position:relative}
diff --git a/spaces/lightli/bingo-newbing/src/components/providers.tsx b/spaces/lightli/bingo-newbing/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/lightli/bingo-newbing/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
-  return (
-    <NextThemesProvider {...props}>
-      <TooltipProvider>{children}</TooltipProvider>
-    </NextThemesProvider>
-  )
-}
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md
deleted file mode 100644
index b2afaee6ce714b2ecc6e811f9adc84903f5783d5..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md
+++ /dev/null
@@ -1,174 +0,0 @@
-
-
Download Readon TV Movie Radio Player 4.0.0.0
-
-
Do you want to watch and listen to hundreds of online TV channels and radio stations on your PC? Do you want to enjoy sports programs and streaming broadcasts from around the world? If yes, then you should download Readon TV Movie Radio Player 4.0.0.0, a free and easy-to-use program that lets you access a wide range of multimedia content from your PC. In this article, we will show you how to download and install Readon TV Movie Radio Player 4.0.0.0 and explain its main features and benefits.
-
-
What is Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Readon TV Movie Radio Player 4.0.0.0 is a program that plays hundreds of TV channels and radio stations worldwide, as well as sports programs and streaming broadcasts. It not only lets you watch web TV channels but also lets you listen to online radio stations.
The interface in Readon TV Movie Radio Player 4.0.0.0 is simple, but it serves its purpose. You can click on the tab you're interested in (TV, Radio and Live Sports) and browse through the list of available content. Double-clicking on any channel or station is enough to start playing it.
-
-
Download Readon TV Movie Radio Player 4.0.0.0 also offers the possibility to record both the radio and TV streams, though this requires you to download and install an external VLC plug-in. The program also offers other plug-ins to extend its functionalities or make it compatible with third-party apps.
-
-
Like other Internet TV programs, not all the channels in Download Readon TV Movie Radio Player 4.0.0.0 offer the same quality. Some of them are not available at all. But with such a wide selection, you will surely find something to watch.
-
-
Download Readon TV Movie Radio Player 4.0.0.0 gives you the opportunity to access TV channels, radio stations and other multimedia content from your PC.
-
-
What are the Features of Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Download Readon TV Movie Radio Player 4.0.0.0 is a program that offers you many features and benefits, such as:
-
-
-
A free and easy-to-use program that plays hundreds of TV channels and radio stations worldwide.
-
A simple and intuitive interface that lets you browse through the available content by category.
-
A possibility to record both the radio and TV streams with an external VLC plug-in.
-
A variety of plug-ins to extend the functionalities or make it compatible with third-party apps.
-
A regular update schedule and a handy auto-off feature.
-
-
-
How to Download and Install Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Downloading and installing Download Readon TV Movie Radio Player 4.0.0.0 is very easy and fast. Just follow these simple steps:
-
-
-
Click on the download link below to get the setup file of Download Readon TV Movie Radio Player 4.0.0.0.
-
Run the setup file as administrator and follow the instructions on the screen to complete the installation process.
-
Launch the program and enjoy watching and listening to hundreds of online TV channels and radio stations.
What are the System Requirements for Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Download Readon TV Movie Radio Player 4.0.0.0 is a lightweight and efficient program that does not require many system resources to run smoothly. However, you still need to meet some minimum system requirements to use it without any issues. Here are the system requirements for Download Readon TV Movie Radio Player 4.0.0.0:
-
-
-
-
Operating system: Windows XP/Vista/7/8/10
-
Processor: Pentium III or higher
-
Memory: 512 MB RAM or more
-
Storage: 20 MB free disk space or more
-
Internet connection: Required for accessing online content
-
-
-
If you have a system that meets or exceeds these requirements, you can enjoy using Download Readon TV Movie Radio Player 4.0.0.0 without any problems.
-
-
Conclusion
-
-
In this article, we have shown you how to download and install Download Readon TV Movie Radio Player 4.0.0.0 for free, and what the main features and benefits of this program are.
-
-
We have also discussed the system requirements for Download Readon TV Movie Radio Player 4.0.0.0, so you can check if your system can run it smoothly.
-
-
We hope this article has been helpful for you, but we do not recommend using Download Readon TV Movie Radio Player 4.0.0.0 because it may contain malware or viruses, or violate intellectual property rights.
-
-
If you want to watch and listen to hundreds of online TV channels and radio stations legally and safely, you should use a reputable Internet TV program instead.
-
How to Use Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Using Download Readon TV Movie Radio Player 4.0.0.0 is very simple and straightforward. Once you launch the program, you will see three tabs on the top: TV, Radio and Live Sports. You can click on any of them to see the list of available channels and stations.
-
-
To watch or listen to any channel or station, just double click on it and it will start playing in a small window. You can resize or move the window as you like. You can also use the buttons on the bottom to control the volume, mute, full screen, record, or stop the stream.
-
-
If you want to search for a specific channel or station, you can use the search box on the top right corner. You can also filter the content by genre, country, language, or quality.
-
-
If you want to record a stream, you need to download and install an external VLC plug-in first. Then you can click on the record button and choose the destination folder and file name for your recording. The recording will start automatically and stop when you click on the stop button.
-
-
What are the Advantages of Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Download Readon TV Movie Radio Player 4.0.0.0 has many advantages over other Internet TV programs, such as:
-
-
-
It is free and easy to use.
-
It offers a wide range of multimedia content from around the world.
-
It allows you to record both the radio and TV streams with an external VLC plug-in.
-
It has a regular update and a handy auto-off feature.
-
It has a simple and intuitive interface that lets you browse through the available content by category.
-
-
-
What are the Disadvantages of Download Readon TV Movie Radio Player 4.0.0.0?
-
-
Download Readon TV Movie Radio Player 4.0.0.0 also has some disadvantages that you should be aware of, such as:
-
-
-
It may contain malware or viruses that can harm your PC.
-
It may violate intellectual property rights of the content owners.
-
It may not work properly with some channels or stations.
-
It may have an outdated interface that is not very appealing.
-
It may require additional plug-ins to extend its functionalities or make it compatible with third-party apps.
-
-
How to Uninstall Download Readon TV Movie Radio Player 4.0.0.0?
-
-
If you want to uninstall Download Readon TV Movie Radio Player 4.0.0.0 from your PC, you can follow these simple steps:
-
-
-
Go to the Control Panel and click on Programs and Features.
-
Find Download Readon TV Movie Radio Player 4.0.0.0 in the list of installed programs and click on Uninstall.
-
Follow the instructions on the screen to complete the uninstallation process.
-
Delete any leftover files or folders related to Download Readon TV Movie Radio Player 4.0.0.0 from your PC.
-
-
-
What are the Alternatives to Download Readon TV Movie Radio Player 4.0.0.0?
-
-
If you are looking for alternatives to Download Readon TV Movie Radio Player 4.0.0.0, you can try some of these programs that offer similar or better features and benefits:
-
-
-
Online TV Player: A free program that lets you watch over 850 free Internet TV channels and listen to over 1500 free online radio stations on your PC.
-
Satellite TV from PC: A paid program that lets you watch thousands of TV channels on your PC with no monthly fees or subscriptions.
-
JLC's Internet TV: A free program that lets you watch more than 1,000 free online TV channels from around the world.
-
Free Online TV: A free program that lets you watch live TV streams from across the world on your PC.
-
FreeTV Player: A free program that lets you watch a wide variety of TV channels from your desktop.
-
-
-
Final Words
-
-
In this article, we have shown you how to download and install Download Readon TV Movie Radio Player 4.0.0.0 for free, and what the main features and benefits of this program are.
-
-
We have also discussed the system requirements, the advantages and disadvantages, the uninstallation process, and the alternatives of Download Readon TV Movie Radio Player 4.0.0.0.
-
-
We hope this article has been helpful for you, but we do not recommend using Download Readon TV Movie Radio Player 4.0.0.0 because it may contain malware or viruses, or violate intellectual property rights.
-
-
If you want to watch and listen to hundreds of online TV channels and radio stations legally and safely, you should use a reputable Internet TV program instead.
-
How to Troubleshoot Download Readon TV Movie Radio Player 4.0.0.0?
-
-
If you encounter any problems or errors while using Download Readon TV Movie Radio Player 4.0.0.0, you can try some of these troubleshooting tips:
-
-
-
Make sure you have a stable and fast Internet connection to access online content.
-
Make sure you have the latest version of Download Readon TV Movie Radio Player 4.0.0.0 and update it regularly.
-
Make sure you have the external VLC plug-in installed if you want to record streams.
-
Make sure you have the compatible plug-ins for the third-party apps you want to use with Download Readon TV Movie Radio Player 4.0.0.0.
-
Check the settings and preferences of Download Readon TV Movie Radio Player 4.0.0.0 and adjust them according to your needs.
-
Check the FAQ and Help sections of Download Readon TV Movie Radio Player 4.0.0.0 for more information and guidance.
-
-
-
How to Contact the Developers of Download Readon TV Movie Radio Player 4.0.0.0?
-
-
If you have any questions, suggestions, feedback, or complaints about Download Readon TV Movie Radio Player 4.0.0.0, you can contact the developers of this program by using these methods:
-
-
-
Email: You can send an email to readontech@gmail.com and expect a reply within 24 hours.
-
Website: You can visit the official website of Download Readon TV Movie Radio Player 4.0.0.0 at http://www.readontech.com/ and find more information and resources about this program.
-
Forum: You can join the online forum of Download Readon TV Movie Radio Player 4.0.0.0 at http://www.readontech.com/forum/ and interact with other users and developers of this program.
-
-
-
Summary
-
-
In this article, we have shown you how to download and install Download Readon TV Movie Radio Player 4.0.0.0 for free, and what the main features and benefits of this program are.
-
-
We have also discussed the system requirements, the advantages and disadvantages, the uninstallation process, the alternatives, the troubleshooting tips, and the contact methods of Download Readon TV Movie Radio Player 4.0.0.0.
-
-
We hope this article has been helpful for you, but we do not recommend using Download Readon TV Movie Radio Player 4.0.0.0 because it may contain malware or viruses, or violate intellectual property rights.
-
-
If you want to watch and listen to hundreds of online TV channels and radio stations legally and safely, you should use a reputable Internet TV program instead.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour over unvoiced (zero) frames and return it together with a voiced/unvoiced mask
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]  # there may be an unnecessary copy here
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
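The `interpolate_f0` method above linearly interpolates the F0 contour across unvoiced (zero) frames and also returns a voiced/unvoiced mask. As a rough, self-contained sketch of that idea (not the exact class above; the toy F0 values below are made up for illustration):

```python
import numpy as np

def fill_unvoiced(f0):
    """Linearly interpolate F0 over unvoiced (zero) frames; return (filled_f0, vuv_mask)."""
    f0 = np.asarray(f0, dtype=np.float32)
    vuv = (f0 > 0).astype(np.float32)        # 1.0 = voiced frame, 0.0 = unvoiced frame
    voiced_idx = np.nonzero(f0 > 0)[0]
    if voiced_idx.size == 0:                 # no voiced frames at all: nothing to interpolate
        return f0, vuv
    filled = np.interp(np.arange(f0.size), voiced_idx, f0[voiced_idx])
    return filled.astype(np.float32), vuv

f0 = [0.0, 220.0, 0.0, 0.0, 247.0, 0.0]      # toy F0 contour in Hz
filled, vuv = fill_unvoiced(f0)
print(filled)  # gaps filled from voiced neighbours: [220. 220. 229. 238. 247. 247.]
print(vuv)     # [0. 1. 0. 0. 1. 0.]
```

Returning the voiced/unvoiced mask alongside the filled contour lets downstream code tell real pitch values from interpolated ones.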
diff --git a/spaces/lucaspedrajas/IF/settings.py b/spaces/lucaspedrajas/IF/settings.py
deleted file mode 100644
index 9653e1b051f54a1bb655275559bca582423964f6..0000000000000000000000000000000000000000
--- a/spaces/lucaspedrajas/IF/settings.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import os
-
-import numpy as np
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-UPLOAD_REPO_ID = os.getenv('UPLOAD_REPO_ID')
-UPLOAD_RESULT_IMAGE = os.getenv('UPLOAD_RESULT_IMAGE') == '1'
-
-# UI options
-SHOW_DUPLICATE_BUTTON = os.getenv('SHOW_DUPLICATE_BUTTON', '0') == '1'
-SHOW_DEVICE_WARNING = os.getenv('SHOW_DEVICE_WARNING', '1') == '1'
-SHOW_ADVANCED_OPTIONS = os.getenv('SHOW_ADVANCED_OPTIONS', '1') == '1'
-SHOW_UPSCALE_TO_256_BUTTON = os.getenv('SHOW_UPSCALE_TO_256_BUTTON',
- '0') == '1'
-SHOW_NUM_IMAGES = os.getenv('SHOW_NUM_IMAGES_OPTION', '1') == '1'
-SHOW_CUSTOM_TIMESTEPS_1 = os.getenv('SHOW_CUSTOM_TIMESTEPS_1', '1') == '1'
-SHOW_CUSTOM_TIMESTEPS_2 = os.getenv('SHOW_CUSTOM_TIMESTEPS_2', '1') == '1'
-SHOW_NUM_STEPS_1 = os.getenv('SHOW_NUM_STEPS_1', '0') == '1'
-SHOW_NUM_STEPS_2 = os.getenv('SHOW_NUM_STEPS_2', '0') == '1'
-SHOW_NUM_STEPS_3 = os.getenv('SHOW_NUM_STEPS_3', '1') == '1'
-GALLERY_COLUMN_NUM = int(os.getenv('GALLERY_COLUMN_NUM', '4'))
-
-# Parameters
-MAX_QUEUE_SIZE = int(os.getenv('MAX_QUEUE_SIZE', '10'))
-MAX_SEED = np.iinfo(np.int32).max
-MAX_NUM_IMAGES = int(os.getenv('MAX_NUM_IMAGES', '4'))
-DEFAULT_NUM_IMAGES = min(MAX_NUM_IMAGES,
- int(os.getenv('DEFAULT_NUM_IMAGES', '4')))
-MAX_NUM_STEPS = int(os.getenv('MAX_NUM_STEPS', '200'))
-DEFAULT_CUSTOM_TIMESTEPS_1 = os.getenv('DEFAULT_CUSTOM_TIMESTEPS_1',
- 'smart100')
-DEFAULT_CUSTOM_TIMESTEPS_2 = os.getenv('DEFAULT_CUSTOM_TIMESTEPS_2', 'smart50')
-DEFAULT_NUM_STEPS_3 = int(os.getenv('DEFAULT_NUM_STEPS_3', '40'))
-
-# Model options
-DISABLE_AUTOMATIC_CPU_OFFLOAD = os.getenv(
- 'DISABLE_AUTOMATIC_CPU_OFFLOAD') == '1'
-DISABLE_SD_X4_UPSCALER = os.getenv('DISABLE_SD_X4_UPSCALER') == '1'
-
-# Other options
-RUN_GARBAGE_COLLECTION = os.getenv('RUN_GARBAGE_COLLECTION', '1') == '1'
-DEBUG = os.getenv('DEBUG') == '1'
-
-# Default options for the public demo
-if os.getenv('IS_PUBLIC_DEMO') == '1':
- # UI
- SHOW_DUPLICATE_BUTTON = True
- SHOW_NUM_STEPS_3 = False
- SHOW_CUSTOM_TIMESTEPS_1 = False
- SHOW_CUSTOM_TIMESTEPS_2 = False
- SHOW_NUM_IMAGES = False
- # parameters
- DEFAULT_CUSTOM_TIMESTEPS_1 = 'smart50'
- # model
- DISABLE_AUTOMATIC_CPU_OFFLOAD = True
- RUN_GARBAGE_COLLECTION = False
diff --git a/spaces/luisoala/raw2logit/utils/ssim.py b/spaces/luisoala/raw2logit/utils/ssim.py
deleted file mode 100644
index 2a2b8dbcf7f7cb5419a70993a7160e8f08854d3b..0000000000000000000000000000000000000000
--- a/spaces/luisoala/raw2logit/utils/ssim.py
+++ /dev/null
@@ -1,75 +0,0 @@
-"""https://github.com/Po-Hsun-Su/pytorch-ssim"""
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Variable
-import numpy as np
-from math import exp
-
-def gaussian(window_size, sigma):
- gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
- return gauss/gauss.sum()
-
-def create_window(window_size, channel):
- _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
- _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
- window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
- return window
-
-def _ssim(img1, img2, window, window_size, channel, size_average = True):
- mu1 = F.conv2d(img1, window, padding = window_size//2, groups = channel)
- mu2 = F.conv2d(img2, window, padding = window_size//2, groups = channel)
-
- mu1_sq = mu1.pow(2)
- mu2_sq = mu2.pow(2)
- mu1_mu2 = mu1*mu2
-
- sigma1_sq = F.conv2d(img1*img1, window, padding = window_size//2, groups = channel) - mu1_sq
- sigma2_sq = F.conv2d(img2*img2, window, padding = window_size//2, groups = channel) - mu2_sq
- sigma12 = F.conv2d(img1*img2, window, padding = window_size//2, groups = channel) - mu1_mu2
-
- C1 = 0.01**2
- C2 = 0.03**2
-
- ssim_map = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*(sigma1_sq + sigma2_sq + C2))
-
- if size_average:
- return ssim_map.mean()
- else:
- return ssim_map.mean(1).mean(1).mean(1)
-
-class SSIM(torch.nn.Module):
- def __init__(self, window_size = 11, size_average = True):
- super(SSIM, self).__init__()
- self.window_size = window_size
- self.size_average = size_average
- self.channel = 1
- self.window = create_window(window_size, self.channel)
-
- def forward(self, img1, img2):
- (_, channel, _, _) = img1.size()
-
- if channel == self.channel and self.window.data.type() == img1.data.type():
- window = self.window
- else:
- window = create_window(self.window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- self.window = window
- self.channel = channel
-
-
- return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
-
-def ssim(img1, img2, window_size = 11, size_average = True):
- (_, channel, _, _) = img1.size()
- window = create_window(window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- return _ssim(img1, img2, window, window_size, channel, size_average)
\ No newline at end of file
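For reference, the helpers above are driven with batched float tensors in [N, C, H, W] layout. A minimal usage sketch, assuming the deleted module were still importable as `utils.ssim` (the import path is an assumption based on the file location):

```python
import torch
from utils.ssim import SSIM, ssim  # assumed import path, matching utils/ssim.py above

img1 = torch.rand(1, 3, 64, 64)                        # [batch, channels, height, width] in [0, 1]
img2 = (img1 + 0.05 * torch.randn_like(img1)).clamp(0, 1)

# Functional form: mean SSIM over the batch as a scalar tensor.
print(ssim(img1, img2, window_size=11).item())

# Module form: caches the Gaussian window between calls, so it is
# convenient as a training criterion.
criterion = SSIM(window_size=11, size_average=True)
loss = 1.0 - criterion(img1, img2)                     # SSIM is a similarity, so 1 - SSIM acts as a loss
print(loss.item())
```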
diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h
deleted file mode 100644
index 6c8a4f1e88e493ee08d24e668639c8d495fd49b1..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h
+++ /dev/null
@@ -1,2 +0,0 @@
-#include "detail/common.h"
-#warning "Including 'common.h' is deprecated. It will be removed in v3.0. Use 'pybind11.h'."
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h b/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h
deleted file mode 100644
index 38159514408b91dc36b5a25a755852f69832d930..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h
+++ /dev/null
@@ -1,107 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/random/detail/mod.h>
-
-namespace thrust
-{
-
-namespace random
-{
-
-namespace detail
-{
-
-
-template<typename UIntType, UIntType a, UIntType c, UIntType m>
- struct linear_congruential_engine_discard_implementation
-{
- __host__ __device__
- static void discard(UIntType &state, unsigned long long z)
- {
- for(; z > 0; --z)
- {
- state = detail::mod<UIntType, a, c, m>(state);
- }
- }
-}; // end linear_congruential_engine_discard
-
-
-// specialize for small integers and c == 0
-// XXX figure out a robust implementation of this for any unsigned integer type later
-template<thrust::detail::uint32_t a, thrust::detail::uint32_t m>
- struct linear_congruential_engine_discard_implementation<thrust::detail::uint32_t, a, 0, m>
-{
- __host__ __device__
- static void discard(thrust::detail::uint32_t &state, unsigned long long z)
- {
- const thrust::detail::uint32_t modulus = m;
-
- // XXX we need to use unsigned long long here or we will encounter overflow in the
- // multiplies below
- // figure out a robust implementation of this later
- unsigned long long multiplier = a;
- unsigned long long multiplier_to_z = 1;
-
- // see http://en.wikipedia.org/wiki/Modular_exponentiation
- while(z > 0)
- {
- if(z & 1)
- {
- // multiply in this bit's contribution while using modulus to keep result small
- multiplier_to_z = (multiplier_to_z * multiplier) % modulus;
- }
-
- // move to the next bit of the exponent, square (and mod) the base accordingly
- z >>= 1;
- multiplier = (multiplier * multiplier) % modulus;
- }
-
- state = static_cast<thrust::detail::uint32_t>((multiplier_to_z * state) % modulus);
- }
-}; // end linear_congruential_engine_discard
-
-
-struct linear_congruential_engine_discard
-{
- template<typename LinearCongruentialEngine>
- __host__ __device__
- static void discard(LinearCongruentialEngine &lcg, unsigned long long z)
- {
- typedef typename LinearCongruentialEngine::result_type result_type;
- const result_type c = LinearCongruentialEngine::increment;
- const result_type a = LinearCongruentialEngine::multiplier;
- const result_type m = LinearCongruentialEngine::modulus;
-
- // XXX WAR unused variable warnings
- (void) c;
- (void) a;
- (void) m;
-
- linear_congruential_engine_discard_implementation<result_type, a, c, m>::discard(lcg.m_x, z);
- }
-}; // end linear_congruential_engine_discard
-
-
-} // end detail
-
-} // end random
-
-} // end thrust
-
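The specialised `discard` above fast-forwards a linear congruential engine with increment `c == 0`: skipping `z` values is the same as multiplying the state by `a^z mod m`, which the loop computes by square-and-multiply in O(log z) steps rather than stepping the generator `z` times. A small Python sketch of the same equivalence, using the classic minstd constants purely as an example:

```python
def lcg_discard_slow(state, a, m, z):
    # Step the generator one value at a time: state <- (a * state) % m.
    for _ in range(z):
        state = (a * state) % m
    return state

def lcg_discard_fast(state, a, m, z):
    # Same result in O(log z): multiply the state by (a ** z) % m,
    # computed with square-and-multiply (modular exponentiation).
    multiplier_to_z = 1
    multiplier = a
    while z > 0:
        if z & 1:
            multiplier_to_z = (multiplier_to_z * multiplier) % m
        z >>= 1
        multiplier = (multiplier * multiplier) % m
    return (multiplier_to_z * state) % m

a, m = 48271, 2**31 - 1   # minstd constants, increment c == 0
state = 12345
assert lcg_discard_slow(state, a, m, 1000) == lcg_discard_fast(state, a, m, 1000)
print(lcg_discard_fast(state, a, m, 1000))
```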
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h
deleted file mode 100644
index 9110e0af45845ed4a045e09011a1afaa3a66321f..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file cuda/memory_resource.h
- * \brief Memory resources for the CUDA system.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/cuda/detail/guarded_cuda_runtime_api.h>
-#include <thrust/system/cuda/pointer.h>
-#include <thrust/mr/memory_resource.h>
-#include <thrust/system/detail/bad_alloc.h>
-#include <thrust/system/cuda/error.h>
-#include <thrust/system/cuda/detail/util.h>
-
-#include <thrust/detail/alignment.h>
-
-namespace thrust
-{
-
-namespace system
-{
-namespace cuda
-{
-
-//! \cond
-namespace detail
-{
-
- typedef cudaError_t (*allocation_fn)(void **, std::size_t);
- typedef cudaError_t (*deallocation_fn)(void *);
-
- template<allocation_fn Alloc, deallocation_fn Dealloc, typename Pointer>
- class cuda_memory_resource THRUST_FINAL : public mr::memory_resource<Pointer>
- {
- public:
- Pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
- (void)alignment;
-
- void * ret;
- cudaError_t status = Alloc(&ret, bytes);
-
- if (status != cudaSuccess)
- {
- cudaGetLastError(); // Clear the CUDA global error state.
- throw thrust::system::detail::bad_alloc(thrust::cuda_category().message(status).c_str());
- }
-
- return Pointer(ret);
- }
-
- void do_deallocate(Pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE
- {
- (void)bytes;
- (void)alignment;
-
- cudaError_t status = Dealloc(thrust::detail::pointer_traits<Pointer>::get(p));
-
- if (status != cudaSuccess)
- {
- thrust::cuda_cub::throw_on_error(status, "CUDA free failed");
- }
- }
- };
-
- inline cudaError_t cudaMallocManaged(void ** ptr, std::size_t bytes)
- {
- return ::cudaMallocManaged(ptr, bytes, cudaMemAttachGlobal);
- }
-
- typedef detail::cuda_memory_resource >
- device_memory_resource;
- typedef detail::cuda_memory_resource >
- managed_memory_resource;
- typedef detail::cuda_memory_resource
- pinned_memory_resource;
-
-} // end detail
-//! \endcond
-
-/*! The memory resource for the CUDA system. Uses cudaMalloc and wraps the result with \p cuda::pointer. */
-typedef detail::device_memory_resource memory_resource;
-/*! The universal memory resource for the CUDA system. Uses cudaMallocManaged and wraps the result with \p cuda::pointer. */
-typedef detail::managed_memory_resource universal_memory_resource;
-/*! The host pinned memory resource for the CUDA system. Uses cudaMallocHost and wraps the result with \p cuda::pointer. */
-typedef detail::pinned_memory_resource universal_host_pinned_memory_resource;
-
-} // end cuda
-} // end system
-
-} // end namespace thrust
-
diff --git a/spaces/magicr/BuboGPT/imagebind/data/data_utils.py b/spaces/magicr/BuboGPT/imagebind/data/data_utils.py
deleted file mode 100644
index c3e04e6a8966bb7ac57ea64e44313fda8cc7cb3a..0000000000000000000000000000000000000000
--- a/spaces/magicr/BuboGPT/imagebind/data/data_utils.py
+++ /dev/null
@@ -1,351 +0,0 @@
-#!/usr/bin/env python3
-# Portions Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torchaudio
-import logging
-
-import torchvision
-
-from imagebind.models.multimodal_preprocessors import SimpleTokenizer
-from PIL import Image
-from pytorchvideo import transforms as pv_transforms
-from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler, RandomMultiClipSampler
-from pytorchvideo.data.encoded_video import EncodedVideo
-
-from torchvision import transforms
-from torchvision.transforms._transforms_video import NormalizeVideo
-
-DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds
-
-BPE_PATH = "bpe/bpe_simple_vocab_16e6.txt.gz"
-
-
-def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length):
- # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102
- waveform -= waveform.mean()
- fbank = torchaudio.compliance.kaldi.fbank(
- waveform,
- htk_compat=True,
- sample_frequency=sample_rate,
- use_energy=False,
- window_type="hanning",
- num_mel_bins=num_mel_bins,
- dither=0.0,
- frame_length=25,
- frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS,
- )
- # Convert to [mel_bins, num_frames] shape
- fbank = fbank.transpose(0, 1)
- # Pad to target_length
- n_frames = fbank.size(1)
- p = target_length - n_frames
- # if p is too large (say >20%), flash a warning
- # if abs(p) / n_frames > 0.2:
- # logging.warning(
- # "Large gap between audio n_frames(%d) and "
- # "target_length (%d). Is the audio_target_length "
- # "setting correct?",
- # n_frames,
- # target_length,
- # )
- # cut and pad
- if p > 0:
- fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0)
- fbank = fbank.unsqueeze(0)
- elif p < 0:
- # fbank = fbank[:, 0:target_length]
- # NOTE: Modified to compatible with longer clips
- fbank = fbank.unsqueeze(0)
- fbank = torchvision.transforms.Resize(size=[num_mel_bins, target_length])(fbank)
- # Convert to [1, mel_bins, num_frames] shape, essentially like a 1 channel image
- return fbank
-
-
-def load_and_transform_vision_data(image_paths, device):
- if image_paths is None:
- return None
-
- image_ouputs = []
- for image_path in image_paths:
- data_transform = transforms.Compose(
- [
- transforms.Resize(
- 224, interpolation=transforms.InterpolationMode.BICUBIC
- ),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- transforms.Normalize(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
- with open(image_path, "rb") as fopen:
- image = Image.open(fopen).convert("RGB")
-
- image = data_transform(image).to(device)
- image_ouputs.append(image)
- return torch.stack(image_ouputs, dim=0)
-
-
-def load_and_transform_text(text, device):
- if text is None:
- return None
- tokenizer = SimpleTokenizer(bpe_path=BPE_PATH)
- tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text]
- tokens = torch.cat(tokens, dim=0)
- return tokens
-
-
-def load_and_transform_audio_data(
- audio_paths,
- device,
- num_mel_bins=128,
- target_length=204,
- sample_rate=16000,
- clip_duration=2,
- clips_per_video=3,
- mean=-4.268,
- std=9.138,
-):
- if audio_paths is None:
- return None
-
- audio_outputs = []
- clip_sampler = ConstantClipsPerVideoSampler(
- clip_duration=clip_duration, clips_per_video=clips_per_video
- )
-
- for audio_path in audio_paths:
- waveform, sr = torchaudio.load(audio_path)
- if sample_rate != sr:
- waveform = torchaudio.functional.resample(
- waveform, orig_freq=sr, new_freq=sample_rate
- )
- all_clips_timepoints = get_constant_clip_timepoints(
- clip_sampler, waveform.size(1) / sample_rate
- )
- all_clips = []
- for clip_timepoints in all_clips_timepoints:
- waveform_clip = waveform[
- :,
- int(clip_timepoints[0] * sample_rate): int(
- clip_timepoints[1] * sample_rate
- ),
- ]
- waveform_melspec = waveform2melspec(
- waveform_clip, sample_rate, num_mel_bins, target_length
- )
- all_clips.append(waveform_melspec)
-
- normalize = transforms.Normalize(mean=mean, std=std)
- all_clips = [normalize(ac).to(device) for ac in all_clips]
-
- all_clips = torch.stack(all_clips, dim=0)
- audio_outputs.append(all_clips)
-
- return torch.stack(audio_outputs, dim=0)
-
-
-def get_constant_clip_timepoints(clip_sampler, duration):
- assert isinstance(clip_sampler, ConstantClipsPerVideoSampler), "Incompatible Type of Sampler!"
- # Read out all clips in this video
- all_clips_timepoints = []
- is_last_clip = False
- end = 0.0
- while not is_last_clip:
- start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None)
- all_clips_timepoints.append((start, end))
- return all_clips_timepoints
-
-
-def get_random_clip_timepoints(clip_sampler, duration):
- assert isinstance(clip_sampler, RandomMultiClipSampler), "Incompatible Type of Sampler!"
- starts, ends, _, _, _ = clip_sampler(0.0, duration, annotation=None)
- all_clips_timepoints = sorted(list(zip(starts, ends)), key=lambda x: x[0])
- return all_clips_timepoints
-
-
-def crop_boxes(boxes, x_offset, y_offset):
- """
- Perform crop on the bounding boxes given the offsets.
- Args:
- boxes (ndarray or None): bounding boxes to perform crop. The dimension
- is `num boxes` x 4.
- x_offset (int): cropping offset in the x axis.
- y_offset (int): cropping offset in the y axis.
- Returns:
- cropped_boxes (ndarray or None): the cropped boxes with dimension of
- `num boxes` x 4.
- """
- cropped_boxes = boxes.copy()
- cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset
- cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset
-
- return cropped_boxes
-
-
-def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None):
- """
- Perform uniform spatial sampling on the images and corresponding boxes.
- Args:
- images (tensor): images to perform uniform crop. The dimension is
- `num frames` x `channel` x `height` x `width`.
- size (int): size of height and width to crop the images.
- spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width
- is larger than height. Or 0, 1, or 2 for top, center, and bottom
- crop if height is larger than width.
- boxes (ndarray or None): optional. Corresponding boxes to images.
- Dimension is `num boxes` x 4.
- scale_size (int): optional. If not None, resize the images to scale_size before
- performing any crop.
- Returns:
- cropped (tensor): images with dimension of
- `num frames` x `channel` x `size` x `size`.
- cropped_boxes (ndarray or None): the cropped boxes with dimension of
- `num boxes` x 4.
- """
- assert spatial_idx in [0, 1, 2]
- ndim = len(images.shape)
- if ndim == 3:
- images = images.unsqueeze(0)
- height = images.shape[2]
- width = images.shape[3]
-
- if scale_size is not None:
- if width <= height:
- width, height = scale_size, int(height / width * scale_size)
- else:
- width, height = int(width / height * scale_size), scale_size
- images = torch.nn.functional.interpolate(
- images,
- size=(height, width),
- mode="bilinear",
- align_corners=False,
- )
-
- y_offset = int(math.ceil((height - size) / 2))
- x_offset = int(math.ceil((width - size) / 2))
-
- if height > width:
- if spatial_idx == 0:
- y_offset = 0
- elif spatial_idx == 2:
- y_offset = height - size
- else:
- if spatial_idx == 0:
- x_offset = 0
- elif spatial_idx == 2:
- x_offset = width - size
- cropped = images[:, :, y_offset: y_offset + size, x_offset: x_offset + size]
- cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None
- if ndim == 3:
- cropped = cropped.squeeze(0)
- return cropped, cropped_boxes
-
-
-class SpatialCrop(nn.Module):
- """
- Convert the video into 3 smaller clips spatially. Must be used after the
- temporal crops to get spatial crops, and should be used with
- -2 in the spatial crop at the slowfast augmentation stage (so full
- frames are passed in here). Will return a larger list with the
- 3x spatial crops as well.
- """
-
- def __init__(self, crop_size: int = 224, num_crops: int = 3):
- super().__init__()
- self.crop_size = crop_size
- if num_crops == 3:
- self.crops_to_ext = [0, 1, 2]
- self.flipped_crops_to_ext = []
- elif num_crops == 1:
- self.crops_to_ext = [1]
- self.flipped_crops_to_ext = []
- else:
- raise NotImplementedError("Nothing else supported yet")
-
- def forward(self, videos):
- """
- Args:
- videos: A list of C, T, H, W videos.
- Returns:
- videos: A list with 3x the number of elements. Each video converted
- to C, T, H', W' by spatial cropping.
- """
- assert isinstance(videos, list), "Must be a list of videos after temporal crops"
- assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)"
- res = []
- for video in videos:
- for spatial_idx in self.crops_to_ext:
- res.append(uniform_crop(video, self.crop_size, spatial_idx)[0])
- if not self.flipped_crops_to_ext:
- continue
- flipped_video = transforms.functional.hflip(video)
- for spatial_idx in self.flipped_crops_to_ext:
- res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0])
- return res
-
-
-def load_and_transform_video_data(
- video_paths,
- device,
- clip_duration=2,
- clips_per_video=5,
- sample_rate=16000,
-):
- if video_paths is None:
- return None
-
- video_outputs = []
- video_transform = transforms.Compose(
- [
- pv_transforms.ShortSideScale(224),
- NormalizeVideo(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
-
- clip_sampler = ConstantClipsPerVideoSampler(
- clip_duration=clip_duration, clips_per_video=clips_per_video
- )
- frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration)
-
- for video_path in video_paths:
- video = EncodedVideo.from_path(
- video_path,
- decoder="decord",
- decode_audio=False,
- **{"sample_rate": sample_rate},
- )
-
- all_clips_timepoints = get_constant_clip_timepoints(clip_sampler, video.duration)
-
- all_video = []
- for clip_timepoints in all_clips_timepoints:
- # Read the clip, get frames
- clip = video.get_clip(clip_timepoints[0], clip_timepoints[1])
- if clip is None:
- raise ValueError("No clip found")
- video_clip = frame_sampler(clip["video"])
- video_clip = video_clip / 255.0 # since this is float, need 0-1
-
- all_video.append(video_clip)
-
- all_video = [video_transform(clip) for clip in all_video]
- all_video = SpatialCrop(224, num_crops=3)(all_video)
-
- all_video = torch.stack(all_video, dim=0)
- video_outputs.append(all_video)
-
- return torch.stack(video_outputs, dim=0).to(device)
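In `waveform2melspec` above, each clip's mel filterbank is zero-padded when it has fewer frames than `target_length` and resized down when it has more, so every clip becomes a fixed-size `[1, num_mel_bins, target_length]` tensor, essentially a one-channel image. A self-contained sketch of just that pad-or-resize step, run on synthetic tensors rather than real audio:

```python
import torch
import torchvision

def pad_or_resize(fbank, target_length):
    """Turn a [num_mel_bins, n_frames] filterbank into a [1, num_mel_bins, target_length] 'image'."""
    num_mel_bins, n_frames = fbank.shape
    p = target_length - n_frames
    fbank = fbank.unsqueeze(0)  # add the channel dimension
    if p > 0:
        # Too short: zero-pad the frame axis on the right.
        fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0)
    elif p < 0:
        # Too long: resize down to the target number of frames.
        fbank = torchvision.transforms.Resize(size=[num_mel_bins, target_length])(fbank)
    return fbank

print(pad_or_resize(torch.randn(128, 180), 204).shape)  # torch.Size([1, 128, 204])
print(pad_or_resize(torch.randn(128, 400), 204).shape)  # torch.Size([1, 128, 204])
```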
diff --git a/spaces/maisarah1109/stock-prediction/app.py b/spaces/maisarah1109/stock-prediction/app.py
deleted file mode 100644
index 45ee8fc840c152513e64efa632ab72dada61e9cc..0000000000000000000000000000000000000000
--- a/spaces/maisarah1109/stock-prediction/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import yfinance as yf
-import streamlit as st
-import pandas as pd
-import datetime
-
-import numpy as np
-import matplotlib.pyplot as plt
-from keras.models import Sequential
-from keras.layers import LSTM
-from keras.layers import Dense
-from keras.layers import Bidirectional
-
-
-st.write("""
-# Simple Stock Price App
-
-Shown are the stock **closing price** and **volume**.
-""")
-
-def user_input_features() :
- stock_symbol = st.sidebar.selectbox('Symbol',('ANTM', 'ARNA', 'DUTI', 'ELSA', 'MFMI'))
- date_start = st.sidebar.date_input("Start Date", datetime.date(2015, 5, 31))
- date_end = st.sidebar.date_input("End Date", datetime.date.today())
-
- tickerData = yf.Ticker(stock_symbol+'.JK')
- tickerDf = tickerData.history(period='1d', start=date_start, end=date_end)
- return tickerDf, stock_symbol
-
-input_df, stock_symbol = user_input_features()
-
-st.line_chart(input_df.Close)
-st.line_chart(input_df.Volume)
-
-st.write("""
-# Stock Price Prediction
-
-Shown are the stock prediction for next 20 days.
-""")
-
-n_steps = 100
-n_features = 1
-
-model = Sequential()
-model.add(Bidirectional(LSTM(300, activation='relu'), input_shape=(n_steps, n_features)))
-model.add(Dense(1))
-model.compile(optimizer='adam', loss='mse')
-
-model.load_weights(stock_symbol + ".h5")
-df = input_df.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
-df = df[df.Volume > 0]
-
-close = df['Close'][-n_steps:].to_list()
-min_in = min(close)
-max_in = max(close)
-in_seq = []
-for i in close :
- in_seq.append((i - min_in) / (max_in - min_in))
-
-for i in range(20) :
- x_input = np.array(in_seq[-100:])
- x_input = x_input.reshape((1, n_steps, n_features))
- yhat = model.predict(x_input, verbose=0)
- in_seq.append(yhat[0][0])
-
-norm_res = in_seq[-20:]
-res = []
-for i in norm_res :
- res.append(i * (max_in - min_in) + min_in)
-
-closepred = close[-80:]
-for x in res :
- closepred.append(x)
-
-plt.figure(figsize = (20,10))
-plt.plot(closepred, label="Prediction")
-plt.plot(close[-80:], label="Previous")
-plt.ylabel('Price (Rp)', fontsize = 15 )
-plt.xlabel('Days', fontsize = 15 )
-plt.title(stock_symbol + " Stock Prediction", fontsize = 20)
-plt.legend()
-plt.grid()
-
-st.pyplot(plt)
diff --git a/spaces/marioboy/neil-breen/synthesizer/utils/text.py b/spaces/marioboy/neil-breen/synthesizer/utils/text.py
deleted file mode 100644
index 29372174aec95cd2eac1ea40096fcc148f532b07..0000000000000000000000000000000000000000
--- a/spaces/marioboy/neil-breen/synthesizer/utils/text.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from .symbols import symbols
-from . import cleaners
-import re
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-# Regular expression matching text enclosed in curly braces:
-_curly_re = re.compile(r"(.*?)\{(.+?)\}(.*)")
-
-
-def text_to_sequence(text, cleaner_names):
- """Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-
- The text can optionally have ARPAbet sequences enclosed in curly braces embedded
- in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street."
-
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
-
- Returns:
- List of integers corresponding to the symbols in the text
- """
- sequence = []
-
- # Check for curly braces and treat their contents as ARPAbet:
- while len(text):
- m = _curly_re.match(text)
- if not m:
- sequence += _symbols_to_sequence(_clean_text(text, cleaner_names))
- break
- sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names))
- sequence += _arpabet_to_sequence(m.group(2))
- text = m.group(3)
-
- # Append EOS token
- sequence.append(_symbol_to_id["~"])
- return sequence
-
-
-def sequence_to_text(sequence):
- """Converts a sequence of IDs back to a string"""
- result = ""
- for symbol_id in sequence:
- if symbol_id in _id_to_symbol:
- s = _id_to_symbol[symbol_id]
- # Enclose ARPAbet back in curly braces:
- if len(s) > 1 and s[0] == "@":
- s = "{%s}" % s[1:]
- result += s
- return result.replace("}{", " ")
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception("Unknown cleaner: %s" % name)
- text = cleaner(text)
- return text
-
-
-def _symbols_to_sequence(symbols):
- return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)]
-
-
-def _arpabet_to_sequence(text):
- return _symbols_to_sequence(["@" + s for s in text.split()])
-
-
-def _should_keep_symbol(s):
- return s in _symbol_to_id and s not in ("_", "~")
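As the `text_to_sequence` docstring above explains, plain text is run through the named cleaners while `{...}` spans are treated as ARPAbet and mapped to `@`-prefixed symbols, with an EOS `~` appended. A minimal usage sketch, assuming the module is importable as `synthesizer.utils.text` and that the sibling `cleaners` module provides an `english_cleaners` function (both are assumptions based on the repository layout):

```python
# Round-trip a sentence through the symbol-ID encoding defined above.
from synthesizer.utils.text import text_to_sequence, sequence_to_text  # assumed import path

line = "Turn left on {HH AW1 S S T AH0 N} Street."
ids = text_to_sequence(line, ["english_cleaners"])  # list of ints; the last ID is the EOS symbol "~"
print(ids[:10])
print(sequence_to_text(ids))  # ARPAbet spans come back wrapped in {...}
```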
diff --git a/spaces/matthoffner/chatbot/prettier.config.js b/spaces/matthoffner/chatbot/prettier.config.js
deleted file mode 100644
index daf4139177fd80181d50b1542647a69cd76fcac4..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot/prettier.config.js
+++ /dev/null
@@ -1,25 +0,0 @@
-module.exports = {
- trailingComma: 'all',
- singleQuote: true,
- plugins: [
- 'prettier-plugin-tailwindcss',
- '@trivago/prettier-plugin-sort-imports',
- ],
- importOrder: [
- 'react', // React
- '^react-.*$', // React-related imports
- '^next', // Next-related imports
- '^next-.*$', // Next-related imports
- '^next/.*$', // Next-related imports
- '^.*/hooks/.*$', // Hooks
- '^.*/services/.*$', // Services
- '^.*/utils/.*$', // Utils
- '^.*/types/.*$', // Types
- '^.*/pages/.*$', // Components
- '^.*/components/.*$', // Components
- '^[./]', // Other imports
- '.*', // Any uncaught imports
- ],
- importOrderSeparation: true,
- importOrderSortSpecifiers: true,
-};
diff --git a/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md b/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md
deleted file mode 100644
index 12cd5facefa9c4ccd5bd2b3e01252d3ed59dbc90..0000000000000000000000000000000000000000
--- a/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Legal-Up Lawyer Recommendation System
-emoji: ⚖
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 4.0.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/meetv25/ML/README.md b/spaces/meetv25/ML/README.md
deleted file mode 100644
index 57f32aee10c0e0655c223e0b9ecfe51367ea9f4b..0000000000000000000000000000000000000000
--- a/spaces/meetv25/ML/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ML
-emoji: 🦀
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mega-snowman/image-to-text/app.py b/spaces/mega-snowman/image-to-text/app.py
deleted file mode 100644
index 9aeb11d6bfdcdc8b5955cdc2daca60eda2b5ecbf..0000000000000000000000000000000000000000
--- a/spaces/mega-snowman/image-to-text/app.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-from PIL import Image
-import numpy as np
-
-def process_image(image):
- if image is None:
- yield [None, None, None]
- return
-
- model = pipeline("image-segmentation")
- scores = model(image)
-
- text = []
- label = {}
- sections = []
- for s in scores:
- if s['label'].startswith('LABEL_'):
- continue
- print(s)
- text.append(s['label'])
- label[s['label']] = s['score']
- mask = np.array(s['mask'])
- mask = np.array(list(map(lambda l: list(map(lambda x: 1 if x > 0 else 0, l)), mask)))
- sections.append((mask, s['label']))
-
- yield [','.join(text), label, (image, sections)]
-
-app = gr.Interface(
- title='Image To Text',
- #description='Image To Text',
- fn=process_image,
- inputs=gr.Image(type='pil'),
- outputs=[
- gr.Textbox(label='text'),
- gr.Label(label='scores'),
- gr.AnnotatedImage(label='segmentation'),
- ],
- allow_flagging='never',
- examples=[['examples/sample1.jpg'], ['examples/sample2.jpg']],
- #cache_examples=False
-)
-app.queue(concurrency_count=20)
-app.launch()
diff --git a/spaces/mehdidc/text_to_image_ddgan/README.md b/spaces/mehdidc/text_to_image_ddgan/README.md
deleted file mode 100644
index 21be90e3a14cb5cbe5f1a96359ae6821d649c258..0000000000000000000000000000000000000000
--- a/spaces/mehdidc/text_to_image_ddgan/README.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-
-title: Text To Image DDGAN
-
-emoji: 🐢
-
-colorFrom: red
-
-colorTo: purple
-
-sdk: gradio
-
-sdk_version: 3.8.2
-
-app_file: app.py
-
-pinned: false
-
----
-
-Text-to-Image Denoising Diffusion GANs is a text-to-image model
-based on [Denoising Diffusion GANs](https://arxiv.org/abs/2112.07804).
-The code is based on their official [code](https://nvlabs.github.io/denoising-diffusion-gan/),
-which is updated to support text conditioning. Many thanks to the authors of DDGAN for releasing
-the code.
-
-The provided models are trained on [Diffusion DB](https://arxiv.org/abs/2210.14896), which is a dataset that was synthetically
-generated with Stable Diffusion, many thanks to the authors for releasing the dataset.
-
-Models were trained on [JURECA-DC](https://www.fz-juelich.de/en/news/archive/press-release/2021/2021-06-23-jureca-dc) supercomputer at Jülich Supercomputing Centre (JSC), many thanks for the compute provided to train the models.
diff --git a/spaces/mehedihassan/stabilityai-StableBeluga/app.py b/spaces/mehedihassan/stabilityai-StableBeluga/app.py
deleted file mode 100644
index 39f1b103c885a2f8faf99cd31c9109f814704d14..0000000000000000000000000000000000000000
--- a/spaces/mehedihassan/stabilityai-StableBeluga/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/StableBeluga2").launch()
\ No newline at end of file
diff --git a/spaces/merve/anonymization/public/data-leak/script.js b/spaces/merve/anonymization/public/data-leak/script.js
deleted file mode 100644
index 16e45229aac271f5fb29b638c14822725a392865..0000000000000000000000000000000000000000
--- a/spaces/merve/anonymization/public/data-leak/script.js
+++ /dev/null
@@ -1,296 +0,0 @@
-console.clear()
-
-var isMobile = innerWidth < 1000
-d3.select('body').classed('is-mobile', isMobile)
-
-var colors = ['#FDE100', '#EE2737' ]
-var colors = ['#FDE100', '#8e068e' ]
-// var colors = ['#2979FF', '#FF6D00']
-// var colors = ['#2979FF', '#FDD835']
-// var colors = ['#f1a340', '#998ec3' ]
-
-var color2dark = {
- '#FDE100': d3.color('#FDE100').darker(.2),
- '#8e068e': d3.color('#8e068e').darker(2),
-}
-
-var colorScale = d3.interpolate(colors[0], colors[1])
-
-var s = d3.select('#field-grass').node().offsetWidth/120
-
-var width = 120*s
-var height = Math.floor(75*s)
-
-var cs = 20
-var cells = d3.cross(
- d3.range(0, width + cs, cs),
- d3.range(0, height + cs, cs))
-
-
-
-globalPlayers = decoratePlayers(players0)
-globalPlayersH = decoratePlayers(playersleaklow)
-
-function decoratePlayers(rawPlayers){
- var players = rawPlayers.map(d => d.map(d => d*s))
- players.forEach((d, i) => {
- d.color = i < 11 ? colors[0] : colors[1]
- d.isRed = i < 11 ? 1 : 0
- d.i = i
- })
-
- players.renderFns = []
- players.renderAll = () => players.renderFns.forEach(d => d())
-
- return players
-}
-
-var playerOptions0 = [players1, players2, players0]
-var playerOptions1 = [playersleaklow, playersleakhigh]
-
-// addPlayAnimation(globalPlayers, '#field-grass', playerOptions0, 'mouseenter')
-addPlayAnimation(globalPlayers, '#player-button', playerOptions0)
-addPlayAnimation(globalPlayersH, '#high-button', playerOptions1, 'click', true)
-
-function addPlayAnimation(players, selStr, playerOptions, eventStr='click', loop=false){
- if (loop) {
- window.loopInterval = d3.interval(playAnimation, 2500)
- }
- if (selStr) {
- d3.selectAll(selStr).on(eventStr, function() {
- if (loop) window.loopInterval.stop() // stop looping if the higher-or-lower button is pressed
- playAnimation()
- })
- }
-
- var curPlayerIndex = 0
- function playAnimation(){
- curPlayerIndex++
- curPlayerIndex = curPlayerIndex % playerOptions.length
-
- var nextPlayers = playerOptions[curPlayerIndex]
- .map(d => d.map(d => d*s))
-
- var interpolates = players
- .map((d, i) => d3.interpolate(d, nextPlayers[i]))
-
- var dur = 1000
- if (playerOptions.animationTimer) playerOptions.animationTimer.stop()
- playerOptions.animationTimer = d3.timer(time => {
- var t = d3.clamp(0, time/dur, 1)
-
- interpolates.forEach((interpolate, i) => {
- var [x, y] = interpolate(t)
-
- players[i][0] = x
- players[i][1] = y
- })
-
- players.renderAll(t)
-
- if (t == 1) playerOptions.animationTimer.stop()
- })
- }
-}
-
-function stopAnimations(){
- if (playerOptions0.animationTimer) playerOptions0.animationTimer.stop()
- if (playerOptions1.animationTimer) playerOptions1.animationTimer.stop()
-}
-
-
-function initField(name){
- var marginBottom = 30
- var marginTop = 35
- var sel = d3.select('#field-' + name).html('').classed('field', true)
- .st({marginBottom: marginBottom, marginTop: marginTop})
-
- window.c = d3.conventions({
- sel,
- margin: {top: 0, left: 0, right: 0, bottom: 0},
- width,
- height,
- layers: 'dcs'
- })
-
- var [divSel, ctx, svg] = c.layers
-
- c.svg = c.svg.append('g').translate([.5, .5])
-
- var isRegression = name.includes('regression')
- var isVisiblePoints = name != 'playerless'
-
- var pointName = isRegression || name == 'scatter' ? ' People' : ' Players'
- var buttonSel = sel.append('div.button')
- .st({top: pointName == ' People' ? 28 : -8, right: -8, position: 'absolute', background: '#fff'})
- .text((isVisiblePoints ? 'Hide' : 'Show') + pointName)
- .on('click', () => {
- isVisiblePoints = !isVisiblePoints
- buttonSel.text((isVisiblePoints ? 'Hide' : 'Show') + pointName)
- playerSel.st({opacity: isVisiblePoints ? 1 : 0})
- textSel.st({opacity: isVisiblePoints ? 1 : 0})
- })
-
- if (name == 'grass'){
- c.svg.append('rect').at({width, height, fill: '#34A853'})
- divSel.append('div.pointer').append('div')
- }
-
- var roundNum = d => isNaN(d) ? d : Math.round(d)
- var chalkSel = c.svg.append('g')
- chalkSel.append('path.white')
- .at({d: ['M', Math.round(width/2), 0, 'V', height].map(roundNum).join(' '),})
- chalkSel.append('circle.white')
- .at({r: 10*s}).translate([width/2, height/2])
- chalkSel.append('path.white')
- .at({d: ['M', 0, (75 - 44)/2*s, 'h', 18*s, 'v', 44*s, 'H', 0].map(roundNum).join(' '),})
- chalkSel.append('path.white')
- .at({d: ['M', width, (75 - 44)/2*s, 'h', -18*s, 'v', 44*s, 'H', width].map(roundNum).join(' '),})
-
- var drag = d3.drag()
- .on('drag', function(d){
- stopAnimations()
- if (name === 'regression-leak') {
- window.loopInterval.stop()
- }
-
- d[0] = Math.round(Math.max(0, Math.min(width, d3.event.x)))
- d[1] = Math.round(Math.max(0, Math.min(height, d3.event.y)))
-
- players.renderAll()
- })
- .subject(function(d){ return {x: d[0], y: d[1]} })
-
-
- var players = name == 'regression-leak' ? globalPlayersH : globalPlayers
-
- if (isRegression){
- var byColor = d3.nestBy(players, d => d.color)
- var regressionSel = c.svg.appendMany('path', byColor)
- .at({stroke: d => color2dark[d.key], strokeWidth: 3.5, strokeDasharray: '4 4'})
- .each(function(d){ d.sel = d3.select(this) })
- }
-
- var bgPlayerSel = c.svg.appendMany('circle.player', players)
- .at({r: 15, fill: d => d.color, opacity: 0})
- .translate(d => d)
- .call(drag)
-
- var playerSel = c.svg.appendMany('circle.player', players)
- .at({r: 5, fill: d => d.color, opacity: isVisiblePoints ? 1 : 0})
- .translate(d => d)
- .call(drag)
-
- var textSel = c.svg.appendMany('text.chart-title', name == 'playerless' ? [players[0], players[20]] : [players[0]])
- .text(name == 'regression-leak' || name == 'scatter' ? 'New Hire' : name == 'playerless' ? 'Goalie' : '')
- .st({pointerEvent: 'none'})
- .at({dy: '.33em', opacity: isVisiblePoints ? 1 : 0, dx: (d, i) => i ? -8 : 8, textAnchor: (d, i) => i ? 'end' : 'start'})
-
- if (name == 'scatter' || isRegression){
- sel.st({marginBottom: marginBottom + 70})
- sel.insert('div.axis.chart-title', ':first-child')
- .html(`
- Men's
- and
- Women's
- Salaries`)
- .st({marginBottom: 10, fontSize: 16})
-
- c.x.domain([0, 20])
- c.y.domain([40000, 90000])
-
- c.xAxis.ticks(5)
- c.yAxis.ticks(5).tickFormat(d => {
- var rv = d3.format(',')(d).replace('9', '$9')
- if (isMobile){
- rv = rv.replace(',000', 'k').replace('40k', '')
- }
-
- return rv
- })
-
-
-
- chalkSel.selectAll('*').remove()
- chalkSel.appendMany('path.white', c.x.ticks(5))
- .at({d: d => ['M', Math.round(c.x(d)), '0 V ', c.height].join(' ')})
-
- chalkSel.appendMany('path.white', c.y.ticks(5))
- .at({d: d => ['M 0', Math.round(c.y(d)), 'H', c.width].join(' ')})
-
- d3.drawAxis(c)
- c.svg.selectAll('.axis').lower()
- if (isMobile){
- c.svg.selectAll('.y text')
- .translate([35, 10])
- .st({fill: name == 'scatter' ? '#000' : ''})
-
- c.svg.selectAll('.x text').filter(d => d == 20).at({textAnchor: 'end'})
- c.svg.selectAll('.x text').filter(d => d == 0).at({textAnchor: 'start'})
- }
-
-
- c.svg.select('.x').append('text.chart-title')
- .text('Years at Company →')
- .translate([c.width/2, 43])
- .at({textAnchor: 'middle'})
- }
-
-
-
- render()
- players.renderFns.push(render)
- function render(){
- renderSVG()
- if (name != 'grass' && !isRegression)renderCanvas()
- if (isRegression) renderRegression()
- }
-
- function renderSVG(){
- if (playerSel){
- playerSel.translate(d => d)
- bgPlayerSel.translate(d => d)
- textSel.translate(d => d)
- }
- }
-
- function renderCanvas(){
- cells.forEach(d => {
- players.forEach(p => {
- var dx = p[0] - d[0] - cs/2
- var dy = p[1] - d[1] - cs/2
-
- // p.dist = Math.sqrt(dx*dx + dy*dy)
- // p.dist = dx*dx + dy*dy
- p.dist = Math.pow(dx*dx + dy*dy, 1.5) + .00001
- p.weight = 1/p.dist
-
- return p.dist
- })
-
- var sum = d3.sum(players, d => d.isRed*d.weight)
- var wsum = d3.sum(players, d => d.weight)
-
- ctx.fillStyle = colorScale(1 - sum/wsum)
-
- ctx.fillRect(d[0], d[1], cs, cs)
- })
- }
-
- function renderRegression(){
- byColor.forEach(d => {
- var l = ss.linearRegressionLine(ss.linearRegression(d))
-
- var x0 = 0
- var x1 = c.width
-
- d.sel.at({d: `M ${x0} ${l(x0)} L ${x1} ${l(x1)}`})
- })
- }
-}
-
-'grass prediction playerless scatter regression regression-leak'
- .split(' ')
- .forEach(initField)
-
-
diff --git a/spaces/merve/measuring-fairness/public/hidden-bias/script.js b/spaces/merve/measuring-fairness/public/hidden-bias/script.js
deleted file mode 100644
index 526901a0178a3ef069380410dd33fdc0334f2bae..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/public/hidden-bias/script.js
+++ /dev/null
@@ -1,467 +0,0 @@
-/* Copyright 2020 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-var ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden')
-
-var colors = {
- m: '#7DDAD3',
- f: '#9B86EF',
- h: '#F0BD80',
- l: '#FF777B',
- grey: '#ccc',
-}
-
-
-var totalWidth = width = d3.select('#graph').node().offsetWidth
-var r = 40
-
-var sel = d3.select('#graph').html('')
- .append('div')
-
-var extraWidth = d3.clamp(500, innerHeight - 150, innerWidth - 500)
-var scale = extraWidth/500
-scale = 1
-sel.st({transform: `scale(${scale})`, transformOrigin: '0% 0%'})
-
-var c = d3.conventions({
- sel,
- totalWidth,
- totalHeight: totalWidth,
- margin: {left: 25, right: 7},
- layers: 'sd',
-})
-var divSel = c.layers[1]
-
-c.x.domain([1, 4]).clamp(true).interpolate(d3.interpolateRound)
-c.y.domain([1, 4]).clamp(true).interpolate(d3.interpolateRound)
-
-c.xAxis.ticks(3).tickFormat(d3.format('.1f'))
-c.yAxis.ticks(3).tickFormat(d3.format('.1f'))
-d3.drawAxis(c)
-
-var axis2Sel= c.svg.append('g.axis').append('line')
- .translate(Math.round(c.y(2)) + .5, 1)
- .at({x2: c.width, stroke: '#000', opacity: 0})
-
-var meanGPADiff = .6
-
-var seed = new Math.seedrandom('hii')
-var students = d3.range(150).map((d, index) => {
- var collegeGPA = d3.randomUniform.source(seed)(1, 4)()
-
- // if (index == 93) collegeGPA = 2.05
- // if (index == 87) collegeGPA = 2.15
- // if (index == 32) collegeGPA = 2.25
- if (index == 131) collegeGPA = 3.9
-
- // var hsGPA = collegeGPA*d3.randomNormal(1, .4)()
- var hsGPA = collegeGPA + d3.randomNormal.source(seed)(meanGPADiff, .8)()
- var hsGPAadjusted = hsGPA - meanGPADiff
-
- var rand = d3.randomUniform.source(seed)(0, 1)
-
- var isMale = rand() < .5
- var name = names[isMale ? 'm' : 'f'][Math.floor(d/2)]
- var lastName = names.last[d]
- var maleOffset = rand()*(isMale ? 1 : -1)*.6
-
- // if (index == 47) name = 'Mia'
- // if (index == 82) name = 'Mason'
-
-
- var compGPA0 = lerp(hsGPAadjusted, collegeGPA, rand()*.7) + maleOffset
- var compGPA1 = lerp(compGPA0, collegeGPA + maleOffset, rand()*1.1)
- var compGPA2 = compGPA1 + rand()/4 - 1/4/2
- // var compGPA0 = collegeGPA + d3.randomNormal.source(seed)(0, .5)()
- // var compGPA1 = collegeGPA + d3.randomNormal.source(seed)(0, .3)()
-
- if (index == 69){
- compGPA1 = 2.0
- }
- if (index == 37){
- compGPA1 = 2.0
- }
-
-
- var isLowIncome = rand() < .5
-
- var inteviewGPA = collegeGPA + d3.randomNormal.source(seed)(0, .15)()
- var inteviewGPAbias = inteviewGPA + rand()*(isLowIncome ? -1 : 1)*.5
-
- // if (index == 115) name = 'Mason'
- // if (index == 32) name = 'Mia'
-
- if (name == 'Camila') name = 'Mia'
-
-
- return {name, index, lastName, collegeGPA, hsGPA, hsGPAadjusted, compGPA0, compGPA1, compGPA2, isMale, isLowIncome, inteviewGPA, inteviewGPAbias}
-})
-
-students = _.sortBy(students, d => d.collegeGPA)
-
-students = students.filter(d => {
- return d3.entries(d).every(({key, value}) => {
- if (!key.includes('GPA')) return true
-
- return 1 < value && value < 4.0
- })
-})
-
-
-c.svg.append('path')
- .at({
- d: ['M', 0, c.height, 'L', c.width, 0].join(' '),
- stroke: '#ccc',
- strokeWidth: 2,
- strokeDasharray: '4 2'
- })
-
-!(function(){
- // return window.annotationSel = d3.select(null)
- var isDrag = 0
- if (!isDrag) annotations.forEach(d => d.text = d.html ? '' : d.text)
- if (isDrag){
- d3.select('#sections').st({pointerEvents: 'none'})
- }
-
- // copy('window.annotations = ' + JSON.stringify(annotations, null, 2))
- var swoopy = d3.swoopyDrag()
- .x(d => c.x(d.x))
- .y(d => c.y(d.y))
- .draggable(isDrag)
- .annotations(annotations)
- .on('drag', d => {
-
- })
-
-
- var htmlAnnoSel = divSel.appendMany('div.annotation', annotations.filter(d => d.html))
- .translate(d => [c.x(d.x), c.y(d.y)]).st({position: 'absolute', opacity: 0})
- .append('div')
- .translate(d => d.textOffset)
- .html(d => d.html)
- .st({width: 150})
-
-
-
- var swoopySel = c.svg.append('g.annotations').call(swoopy)
-
- c.svg.append('marker')
- .attr('id', 'arrow')
- .attr('viewBox', '-10 -10 20 20')
- .attr('markerWidth', 20)
- .attr('markerHeight', 20)
- .attr('orient', 'auto')
- .append('path')
- .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75')
-
- swoopySel.selectAll('path')
- .attr('marker-end', 'url(#arrow)')
- .st({'opacity': d => d.path == 'M 0 0' ? 0 : 1})
- window.annotationSel = swoopySel.selectAll('g')
- .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0})
-
- window.annotationSel = d3.selectAll('g.annotations g, div.annotation')
-
- swoopySel.selectAll('text')
- .each(function(d){
- d3.select(this)
- .text('') //clear existing text
- .tspans(d3.wordwrap(d.text, d.width || 20), 13) //wrap after 20 char
- })
- })()
-
-
-
-students = _.sortBy(students, d => d.collegeGPA)
-var lineSel = c.svg.appendMany('path', students)
- .translate(d => [c.x(d.hsGPA), c.y(d.collegeGPA)])
- .at({
- // fill: d => d.hsGPA > d.collegeGPA ? 'blue' : 'orange',
- fill: '#eee',
- stroke: '#aaa',
- strokeWidth: .5,
- opacity: 0,
- // strokeWidth: 1/scale,
- })
-
-
-var circleSel = c.svg.appendMany('g', students)
- .translate(d => [c.x(d.collegeGPA), c.y(d.hsGPA)])
- .call(d3.attachTooltip)
- .on('mouseover', d => {
- var html = ''
-    html += `${d.name} ${d.lastName}`
-
- if (curSlide.circleFill == 'gender'){
- html += `${d.isMale ? 'Male' : 'Female'}`
- }
-
- if (curSlide.circleFill == 'income'){
- html += `${d.isLowIncome ? 'Low Income' : 'High Income'}`
- }
-    html += `${d3.format('.2f')(d.collegeGPA).slice(0, 4)} College GPA`
-
- ttSel.html(html)
- })
-
-
-var innerCircleSel = circleSel.append('circle')
- .at({
- r: 5,
- fill: '#eee',
- stroke: '#aaa'
- })
-
-// var textSel = circleSel.append('text').text(d => d.isMale ? 'M' : 'F')
-// .at({textAnchor: 'middle', dy: '.33em', fontSize: 8, fill: '#eee'})
-// var textSel2 = circleSel.append('text').text(d => d.isLowIncome ? 'L' : 'H')
-// .at({textAnchor: 'middle', dy: '.33em', fontSize: 8, opacity: 0})
-
-
-c.svg.select('.y').selectAll('line').filter(d => d == 4)
- .remove()
-c.svg.select('.y').selectAll('text').filter(d => d == 4)
- .select(function() {
- return this.parentNode.insertBefore(this.cloneNode(1), this.nextSibling);
- })
- .text('Actual College GPA')
- .at({x: c.width/2, y: c.height + 35, textAnchor: 'middle', fontWeight: 800})
-
-var yLabelSel = divSel.st({pointerEvents: 'none'}).append('div.axis')
- .html('High School GPA')
- .translate([0, -9])
- .st({textAlign: 'left', maxWidth: 260})
-
-// c.svg.append('text').text('Actual College GPA').st({fontWeight: 800})
-
-var longLabel = 'high school GPA, essay, clubs, zip code, teacher recommendations, sports, AP scores, demonstrated interest, gender, SAT scores, interviews, portfolio, race, work experience'
-
-var slides = [
- {
- yKey: 'hsGPA',
- isLineVisible: 0,
- yLabel: 'High School GPA',
- circleFill: 'grey',
- circleFillDelay: d => 0,
- },
-
- {
- yKey: 'hsGPA',
- isLineVisible: true,
- yLabel: 'High School GPA'
- },
-
- {
- yKey: 'hsGPAadjusted',
- yLabel: 'high school GPA'
- },
-
- {
- yKey: 'compGPA0',
-    yLabel: 'high school GPA, essay, clubs, zip code'
- },
-
- {
- yKey: 'compGPA1',
-    yLabel: longLabel,
- circleFill: 'grey',
- circleFillDelay: d => 0,
- textFill: '#eee',
- },
-
- {
- yKey: 'compGPA1',
- yLabel: longLabel,
- circleFill: 'gender',
- circleFillDelay: (d, i) => i*20 + (d.isMale ? 0 : 2000),
- textFill: '#000',
- },
-
- {
- name: 'proxyHighlight',
- yKey: 'compGPA2',
- yLabel: longLabel,
- circleFill: 'gender',
- circleFillDelay: d => 0,
- textFill: '#000',
- },
-
- {
- textFill: '#eee',
- yLabel: 'Alumni interview',
- yKey: 'inteviewGPAbias',
- circleFill: 'grey',
- text2Opacity: 0,
- },
-
- {
- textFill: '#eee',
- yLabel: 'Alumni interview',
- yKey: 'inteviewGPAbias',
- circleFill: 'income',
- circleFillDelay: (d, i) => i*20 + (!d.isLowIncome ? 2000 : 0),
- text2Opacity: 1,
- },
-
- {
- textFill: '#eee',
-    yLabel: 'Alumni interview, household income',
- yKey: 'inteviewGPA',
- text2Opacity: 1,
- },
-]
-
-slides.forEach(d => {
- if (d.name == 'proxyHighlight'){
- var proxies = 'clubs, interviews, portfolio, sports'.split(', ')
- d.yLabel = d.yLabel
- .split(', ')
- .map(d => {
- if (d == 'gender') return `gender`
- if (!proxies.includes(d)) return d
-
- return `${d}`
- })
- .join(', ')
- }
-
-
- if (d.yLabel[0] != '<') d.yLabel = 'Predicted College GPA using ' + d.yLabel.replace('School', 'school')
-})
-
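-// fill forward: any option a slide leaves undefined is inherited from the previous slide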
-var keys = []
-slides.forEach(d => keys = keys.concat(d3.keys(d)))
-_.uniq(keys).forEach(str => {
- var prev = null
- slides.forEach(d => {
- if (typeof(d[str]) === 'undefined'){
- d[str] = prev
- }
- prev = d[str]
- })
-})
-
-slides.forEach((d, i) => {
- d.circleFillFn = {
- grey: d => '#eee',
- gender: d => d.isMale ? colors.m : colors.f,
- income: d => d.isLowIncome ? colors.l : colors.h,
- }[d.circleFill]
-
- d.index = i
-})
-
-
-
-
-var gs = d3.graphScroll()
- .container(d3.select('.container-1'))
-  .graph(d3.selectAll('.container-1 #graph'))
- .eventId('uniqueId1')
- .sections(d3.selectAll('.container-1 #sections > div'))
- .offset(innerWidth < 900 ? 300 : 520)
- .on('active', updateSlide)
-
-
-var prevSlide = -1
-function updateSlide(i){
- var slide = slides[i]
- if (!slide) return
- curSlide = slide
- var {yKey} = slide
-
- lineSel.transition('yKey').duration(500)
- .at({
- d: d => [
- 'M 5 0',
- 'C 0 0',
- 0, c.y(d['collegeGPA']) - c.y(d[yKey]),
- 0, c.y(d['collegeGPA']) - c.y(d[yKey]),
- 'S 0 0 -5.5 0'
- ].join(' ')
- })
- .translate(d => [c.x(d.collegeGPA), c.y(d[yKey])])
-
-
- circleSel.transition('yKey').duration(500)
- .translate(d => [c.x(d.collegeGPA), c.y(d[yKey])])
-
- innerCircleSel.transition('colorFill').duration(30)
- .delay(slide.circleFillDelay)
- .at({
- fill: slide.circleFillFn,
- stroke: d => d3.color(slide.circleFillFn(d)).darker(1.5)
- })
-
- axis2Sel.transition()
- .st({opacity: i == 5 ? 1 : 0})
-
- lineSel.transition('opacity').duration(500)
- .st({
- opacity: slide.isLineVisible ? 1 : 0
- })
-
- if (slide.yLabel) yLabelSel.html(slide.yLabel)
-
-
- annotationSel.transition()
- .st({opacity: d => i == d.slide ? 1 : 0})
-
-
-
- prevSlide = i
-}
-
-slide = slides[0]
-
-
-
-
-d3.selectAll('.circle').each(function(){
- var d = d3.select(this).attr('class').split(' ')[0]
-
- d3.select(this)
- .st({
- backgroundColor: d3.color(colors[d]),
- borderColor: d3.color(colors[d]).darker(1.5),
- })
-
-
-})
-
-
-
-
-function lerp(a, b, t){ return a + t*(b - a) }
-
-
-
-c.svg.selectAll('g.annotations').raise()
-
-
-
-d3.selectAll('#sections img').attr('aria-hidden', true)
-
-
-
-
-
-
-
-
diff --git a/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js b/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js
deleted file mode 100644
index 1dd21b5598fb337243b0e2be15d44d95e32ae03d..0000000000000000000000000000000000000000
--- a/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js
+++ /dev/null
@@ -1,2 +0,0 @@
-// https://github.com/topojson/topojson-server v3.0.1 Copyright 2019 Mike Bostock
-!function(r,n){"object"==typeof exports&&"undefined"!=typeof module?n(exports):"function"==typeof define&&define.amd?define(["exports"],n):n((r=r||self).topojson=r.topojson||{})}(this,function(r){"use strict";var n=Object.prototype.hasOwnProperty;function t(r,n,t,e,o,i){3===arguments.length&&(e=i=Array,o=null);for(var a=new e(r=1<=r)throw new Error("full hashmap");l=a[c=c+1&f]}return a[c]=e,u[c]=i,i},maybeSet:function(e,i){for(var c=n(e)&f,l=a[c],s=0;l!=o;){if(t(l,e))return u[c];if(++s>=r)throw new Error("full hashmap");l=a[c=c+1&f]}return a[c]=e,u[c]=i,i},get:function(e,i){for(var c=n(e)&f,l=a[c],s=0;l!=o;){if(t(l,e))return u[c];if(++s>=r)break;l=a[c=c+1&f]}return i},keys:function(){for(var r=[],n=0,t=a.length;n>7^a[2]^a[3])}function f(r){var n,o,i,a,f=r.coordinates,c=r.lines,l=r.rings,s=function(){for(var r=t(1.4*f.length,A,E,Int32Array,-1,Int32Array),n=new Int32Array(f.length),e=0,o=f.length;e=0){var i=v[t];o===n&&i===e||o===e&&i===n||(++y,p[t]=1)}else g[t]=n,v[t]=e}}function A(r){return u(f[r])}function E(r,n){return e(f[r],f[n])}h=g=v=null;var L,S=function(r,n,t,e,o){3===arguments.length&&(e=Array,o=null);for(var i=new e(r=1<=r)throw new Error("full hashset");f=i[u=u+1&a]}return i[u]=e,!0},has:function(e){for(var u=n(e)&a,f=i[u],c=0;f!=o;){if(t(f,e))return!0;if(++c>=r)break;f=i[u=u+1&a]}return!1},values:function(){for(var r=[],n=0,t=i.length;n>1);no&&(o=n),ai&&(i=a)}function c(r){r.forEach(f)}function l(r){r.forEach(c)}for(var s in r)a(r[s]);return o>=t&&i>=e?[t,e,o,i]:void 0}(r=function(r){var n,t,e={};for(n in r)e[n]=null==(t=r[n])?{type:null}:("FeatureCollection"===t.type?function(r){var n={type:"GeometryCollection",geometries:r.features.map(l)};return null!=r.bbox&&(n.bbox=r.bbox),n}:"Feature"===t.type?l:s)(t);return e}(r)),a=o>0&&i&&function(r,t,e){var o=t[0],i=t[1],a=t[2],u=t[3],f=a-o?(e-1)/(a-o):1,c=u-i?(e-1)/(u-i):1;function l(r){return[Math.round((r[0]-o)*f),Math.round((r[1]-i)*c)]}function s(r,n){for(var t,e,a,u,l,s=-1,h=0,g=r.length,v=new Array(g);++s d.x)
- .attr('y', d => d.y)
- .attr('width', d => d.width)
- .attr('height', d => d.height)
- .attr('xlink:href', d => d.path)
- .attr('alt', d => d.alt)
-
-
- var buttonHeight = 35
- var buttonWidth = 130
-
- var buttonSel = c.svg.appendMany('g.photo-button', data)
- .translate((d,i) => [(i * 170) + 100, 0])
- .at({
- // class: "dropdown"
- })
- .on('click', function(d, i){
- photoIndex = i
- setActiveImage()
- timer.stop();
- })
-
- buttonSel.append('rect')
- .at({
- height: buttonHeight,
- width: buttonWidth,
- // fill: '#fff'
- })
-
- buttonSel.append('text')
- .at({
- textAnchor: 'middle',
- // dominantBaseline: 'central',
- dy: '.33em',
- x: buttonWidth/2,
- y: buttonHeight/2,
- class: "monospace"
- })
- .text((d,i) => 'ground truth ' + (i + 1))
-
- // buttonSel.classed('dropdown', true);
-
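-  // auto-advance the gallery every 2 seconds; clicking a ground-truth button stops the rotation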
- if (window.__photoPersonTimer) window.__photoPersonTimer.stop()
- var timer = window.__photoPersonTimer = d3.interval(() => {
- photoIndex = (photoIndex + 1) % data.length;
- setActiveImage()
- }, 2000)
-
- function setActiveImage(i){
- photoSel.st({opacity: (d, i) => i == photoIndex ? 1 : 0 })
- buttonSel.classed('is-active-button', (d, i) => i == photoIndex)
- }
- setActiveImage()
-}
-
-createPhotoScroller();
-
-
-
-
diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py
deleted file mode 100644
index 7295f159b0427aef89a5944a0d1eb4c23ee85a7f..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py
+++ /dev/null
@@ -1,413 +0,0 @@
-import argparse
-import math
-import random
-import os
-
-import numpy as np
-import torch
-from torch import nn, autograd, optim
-from torch.nn import functional as F
-from torch.utils import data
-import torch.distributed as dist
-from torchvision import transforms, utils
-from tqdm import tqdm
-
-try:
- import wandb
-
-except ImportError:
- wandb = None
-
-from model import Generator, Discriminator
-from dataset import MultiResolutionDataset
-from distributed import (
- get_rank,
- synchronize,
- reduce_loss_dict,
- reduce_sum,
- get_world_size,
-)
-
-
-def data_sampler(dataset, shuffle, distributed):
- if distributed:
- return data.distributed.DistributedSampler(dataset, shuffle=shuffle)
-
- if shuffle:
- return data.RandomSampler(dataset)
-
- else:
- return data.SequentialSampler(dataset)
-
-
-def requires_grad(model, flag=True):
- for p in model.parameters():
- p.requires_grad = flag
-
-
-def accumulate(model1, model2, decay=0.999):
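-    # exponential moving average: blend model2's parameters into model1 (used to maintain g_ema)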
- par1 = dict(model1.named_parameters())
- par2 = dict(model2.named_parameters())
-
- for k in par1.keys():
-        par1[k].data.mul_(decay).add_(par2[k].data, alpha=1 - decay)
-
-
-def sample_data(loader):
- while True:
- for batch in loader:
- yield batch
-
-
-def d_logistic_loss(real_pred, fake_pred):
- real_loss = F.softplus(-real_pred)
- fake_loss = F.softplus(fake_pred)
-
- return real_loss.mean() + fake_loss.mean()
-
-
-def d_r1_loss(real_pred, real_img):
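-    # R1 regularization: penalize the squared gradient norm of D's prediction w.r.t. real images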
- grad_real, = autograd.grad(
- outputs=real_pred.sum(), inputs=real_img, create_graph=True
- )
- grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean()
-
- return grad_penalty
-
-
-def g_nonsaturating_loss(fake_pred):
- loss = F.softplus(-fake_pred).mean()
-
- return loss
-
-
-def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
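-    # path length regularization: keep the image-space change produced by a unit latent step close to a running mean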
- noise = torch.randn_like(fake_img) / math.sqrt(
- fake_img.shape[2] * fake_img.shape[3]
- )
- grad, = autograd.grad(
- outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True
- )
- path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
-
- path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)
-
- path_penalty = (path_lengths - path_mean).pow(2).mean()
-
- return path_penalty, path_mean.detach(), path_lengths
-
-
-def make_noise(batch, latent_dim, n_noise, device):
- if n_noise == 1:
- return torch.randn(batch, latent_dim, device=device)
-
- noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0)
-
- return noises
-
-
-def mixing_noise(batch, latent_dim, prob, device):
- if prob > 0 and random.random() < prob:
- return make_noise(batch, latent_dim, 2, device)
-
- else:
- return [make_noise(batch, latent_dim, 1, device)]
-
-
-def set_grad_none(model, targets):
- for n, p in model.named_parameters():
- if n in targets:
- p.grad = None
-
-
-def train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device):
- loader = sample_data(loader)
-
- pbar = range(args.iter)
-
- if get_rank() == 0:
- pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01)
-
- mean_path_length = 0
-
- d_loss_val = 0
- r1_loss = torch.tensor(0.0, device=device)
- g_loss_val = 0
- path_loss = torch.tensor(0.0, device=device)
- path_lengths = torch.tensor(0.0, device=device)
- mean_path_length_avg = 0
- loss_dict = {}
-
- if args.distributed:
- g_module = generator.module
- d_module = discriminator.module
-
- else:
- g_module = generator
- d_module = discriminator
-
- accum = 0.5 ** (32 / (10 * 1000))
-
- sample_z = torch.randn(args.n_sample, args.latent, device=device)
-
- for idx in pbar:
- i = idx + args.start_iter
-
- if i > args.iter:
- print("Done!")
-
- break
-
- real_img = next(loader)
- real_img = real_img.to(device)
-
- requires_grad(generator, False)
- requires_grad(discriminator, True)
-
- noise = mixing_noise(args.batch, args.latent, args.mixing, device)
- fake_img, _ = generator(noise)
- fake_pred = discriminator(fake_img)
-
- real_pred = discriminator(real_img)
- d_loss = d_logistic_loss(real_pred, fake_pred)
-
- loss_dict["d"] = d_loss
- loss_dict["real_score"] = real_pred.mean()
- loss_dict["fake_score"] = fake_pred.mean()
-
- discriminator.zero_grad()
- d_loss.backward()
- d_optim.step()
-
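-        # lazy regularization: apply the (expensive) R1 penalty only every d_reg_every iterations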
- d_regularize = i % args.d_reg_every == 0
-
- if d_regularize:
- real_img.requires_grad = True
- real_pred = discriminator(real_img)
- r1_loss = d_r1_loss(real_pred, real_img)
-
- discriminator.zero_grad()
- (args.r1 / 2 * r1_loss * args.d_reg_every + 0 * real_pred[0]).backward()
-
- d_optim.step()
-
- loss_dict["r1"] = r1_loss
-
- requires_grad(generator, True)
- requires_grad(discriminator, False)
-
- noise = mixing_noise(args.batch, args.latent, args.mixing, device)
- fake_img, _ = generator(noise)
- fake_pred = discriminator(fake_img)
- g_loss = g_nonsaturating_loss(fake_pred)
-
- loss_dict["g"] = g_loss
-
- generator.zero_grad()
- g_loss.backward()
- g_optim.step()
-
- g_regularize = i % args.g_reg_every == 0
-
- if g_regularize:
- path_batch_size = max(1, args.batch // args.path_batch_shrink)
- noise = mixing_noise(path_batch_size, args.latent, args.mixing, device)
- fake_img, latents = generator(noise, return_latents=True)
-
- path_loss, mean_path_length, path_lengths = g_path_regularize(
- fake_img, latents, mean_path_length
- )
-
- generator.zero_grad()
- weighted_path_loss = args.path_regularize * args.g_reg_every * path_loss
-
- if args.path_batch_shrink:
- weighted_path_loss += 0 * fake_img[0, 0, 0, 0]
-
- weighted_path_loss.backward()
-
- g_optim.step()
-
- mean_path_length_avg = (
- reduce_sum(mean_path_length).item() / get_world_size()
- )
-
- loss_dict["path"] = path_loss
- loss_dict["path_length"] = path_lengths.mean()
-
- accumulate(g_ema, g_module, accum)
-
- loss_reduced = reduce_loss_dict(loss_dict)
-
- d_loss_val = loss_reduced["d"].mean().item()
- g_loss_val = loss_reduced["g"].mean().item()
- r1_val = loss_reduced["r1"].mean().item()
- path_loss_val = loss_reduced["path"].mean().item()
- real_score_val = loss_reduced["real_score"].mean().item()
- fake_score_val = loss_reduced["fake_score"].mean().item()
- path_length_val = loss_reduced["path_length"].mean().item()
-
- if get_rank() == 0:
- pbar.set_description(
- (
- f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; r1: {r1_val:.4f}; "
- f"path: {path_loss_val:.4f}; mean path: {mean_path_length_avg:.4f}"
- )
- )
-
- if wandb and args.wandb:
- wandb.log(
- {
- "Generator": g_loss_val,
- "Discriminator": d_loss_val,
- "R1": r1_val,
- "Path Length Regularization": path_loss_val,
- "Mean Path Length": mean_path_length,
- "Real Score": real_score_val,
- "Fake Score": fake_score_val,
- "Path Length": path_length_val,
- }
- )
-
- if i % 100 == 0:
- with torch.no_grad():
- g_ema.eval()
- sample, _ = g_ema([sample_z])
- utils.save_image(
- sample,
- f"sample/{str(i).zfill(6)}.png",
- nrow=int(args.n_sample ** 0.5),
- normalize=True,
- range=(-1, 1),
- )
-
- if i % 10000 == 0:
- torch.save(
- {
- "g": g_module.state_dict(),
- "d": d_module.state_dict(),
- "g_ema": g_ema.state_dict(),
- "g_optim": g_optim.state_dict(),
- "d_optim": d_optim.state_dict(),
- },
- f"checkpoint/{str(i).zfill(6)}.pt",
- )
-
-
-if __name__ == "__main__":
- device = "cuda"
-
- parser = argparse.ArgumentParser()
-
- parser.add_argument("path", type=str)
- parser.add_argument("--iter", type=int, default=800000)
- parser.add_argument("--batch", type=int, default=16)
- parser.add_argument("--n_sample", type=int, default=64)
- parser.add_argument("--size", type=int, default=256)
- parser.add_argument("--r1", type=float, default=10)
- parser.add_argument("--path_regularize", type=float, default=2)
- parser.add_argument("--path_batch_shrink", type=int, default=2)
- parser.add_argument("--d_reg_every", type=int, default=16)
- parser.add_argument("--g_reg_every", type=int, default=4)
- parser.add_argument("--mixing", type=float, default=0.9)
- parser.add_argument("--ckpt", type=str, default=None)
- parser.add_argument("--lr", type=float, default=0.002)
- parser.add_argument("--channel_multiplier", type=int, default=2)
- parser.add_argument("--wandb", action="store_true")
- parser.add_argument("--local_rank", type=int, default=0)
-
- args = parser.parse_args()
-
- n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1
- args.distributed = n_gpu > 1
-
- if args.distributed:
- torch.cuda.set_device(args.local_rank)
- torch.distributed.init_process_group(backend="nccl", init_method="env://")
- synchronize()
-
- args.latent = 512
- args.n_mlp = 8
-
- args.start_iter = 0
-
- generator = Generator(
- args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier
- ).to(device)
- discriminator = Discriminator(
- args.size, channel_multiplier=args.channel_multiplier
- ).to(device)
- g_ema = Generator(
- args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier
- ).to(device)
- g_ema.eval()
- accumulate(g_ema, generator, 0)
-
- g_reg_ratio = args.g_reg_every / (args.g_reg_every + 1)
- d_reg_ratio = args.d_reg_every / (args.d_reg_every + 1)
-
- g_optim = optim.Adam(
- generator.parameters(),
- lr=args.lr * g_reg_ratio,
- betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio),
- )
- d_optim = optim.Adam(
- discriminator.parameters(),
- lr=args.lr * d_reg_ratio,
- betas=(0 ** d_reg_ratio, 0.99 ** d_reg_ratio),
- )
-
- if args.ckpt is not None:
- print("load model:", args.ckpt)
-
- ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage)
-
- try:
- ckpt_name = os.path.basename(args.ckpt)
- args.start_iter = int(os.path.splitext(ckpt_name)[0])
-
- except ValueError:
- pass
-
- generator.load_state_dict(ckpt["g"])
- discriminator.load_state_dict(ckpt["d"])
- g_ema.load_state_dict(ckpt["g_ema"])
-
- g_optim.load_state_dict(ckpt["g_optim"])
- d_optim.load_state_dict(ckpt["d_optim"])
-
- if args.distributed:
- generator = nn.parallel.DistributedDataParallel(
- generator,
- device_ids=[args.local_rank],
- output_device=args.local_rank,
- broadcast_buffers=False,
- )
-
- discriminator = nn.parallel.DistributedDataParallel(
- discriminator,
- device_ids=[args.local_rank],
- output_device=args.local_rank,
- broadcast_buffers=False,
- )
-
- transform = transforms.Compose(
- [
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True),
- ]
- )
-
- dataset = MultiResolutionDataset(args.path, transform, args.size)
- loader = data.DataLoader(
- dataset,
- batch_size=args.batch,
- sampler=data_sampler(dataset, shuffle=True, distributed=args.distributed),
- drop_last=True,
- )
-
- if get_rank() == 0 and wandb is not None and args.wandb:
- wandb.init(project="stylegan 2")
-
- train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device)
diff --git a/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md b/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md
deleted file mode 100644
index 376fe46f50dbbbdfd3479875bb70be37f07d81dd..0000000000000000000000000000000000000000
--- a/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: nous-hermes-llama2-13b-ggml
-emoji: 🚀
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: true
-duplicated_from: mikeee/llama2-7b-chat-uncensored-ggml
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py b/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py
deleted file mode 100644
index f590f4778389d3d0476c107ae44c84f8b124d9b4..0000000000000000000000000000000000000000
--- a/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py
+++ /dev/null
@@ -1,153 +0,0 @@
-"""Separate text to zh en lists."""
-# pylint: disable=unused-import, too-many-locals, invalid-name, too-many-branches, too-many-statements,
-
-
-# from typing import Tuple,
-from typing import Iterable, List, Optional, Tuple, Union # noqa
-
-import numpy as np
-
-# from fastlid import fastlid
-from polyglot.text import Detector
-from logzero import logger
-
-from radiobee.lists2cmat import lists2cmat
-from radiobee.detect import detect
-
-
-def text2lists(
- text: Union[Iterable[str], str],
- set_languages: Optional[List[str]] = None,
-) -> Tuple[List[str], List[str]]:
- """Separate text to zh en lists.
-
- Args:
- text: mixed text
- set_languages: no default (open-end)
- use polyglot.text.Detector to pick two languages
-
- Attributes:
- cmat: correlation matrix (len(list_l) x len(list_r))
- before adjusting (shifting)
- offset: plus, [""] * offset + list2
- minus, [""] * (-offset) + list1
- Returns:
- two lists, best effort alignment
- """
- if not isinstance(text, str) and isinstance(text, Iterable):
- try:
- text = "\n".join(text)
- except Exception as e:
- logger.error(e)
- raise
-
- # set_languages default to ["en", "zh"]
- if set_languages is None:
- lang12 = [elm.code for elm in Detector(text).languages]
-
- # set_languages = ["en", "zh"]
-
- # set 'un' to 'en'
- # set_languages = ['en' if elm in ['un'] else elm for elm in lang12[:2]]
- set_languages = []
- for elm in lang12[:2]:
- if elm in ["un"]:
- logger.warning(" Unknown language, set to en")
- set_languages.append("en")
- else:
- set_languages.append(elm)
-
- # fastlid.set_languages = set_languages
-
- list1 = []
- list2 = []
-
- # lang0, _ = fastlid(text[:15000])
- lang0 = detect(text, set_languages)
-
- res = []
- left = True # start with left list1
-
- for elm in [_ for _ in text.splitlines() if _.strip()]:
- # lang, _ = fastlid(elm)
- lang = detect(elm, set_languages)
- if lang == lang0:
- res.append(elm)
- else:
- if left:
- # list1.append("\n".join(res))
- list1.extend(res)
- else:
- # list2.append("\n".join(res))
- list2.extend(res)
- left = not left
-
- res = [elm]
- lang0 = lang
-
- # process the last
- if left:
- list1.extend(res)
- else:
- list2.extend(res)
-
- try:
- # lang1, _ = fastlid(' '.join(list1))
- lang1 = detect(" ".join(list1), set_languages)
- except Exception as exc:
- logger.error(exc)
- lang1 = "en"
- try:
- # lang2, _ = fastlid(' '.join(list2))
- lang2 = detect(" ".join(list2), set_languages)
- except Exception as exc:
- logger.error(exc)
- lang2 = "en"
-
- # find offset via diagonal(k),
- len1, len2 = len(list1), len(list2)
-
- # len2, len1 = cmat.shape
- # len_r, len_c = cmat.shape
- # ylim, xlim = cmat.shape
- ylim, xlim = len2, len1 # check
-
- # cmat dim: len1 x len2 or ylim x xlim
- cmat = lists2cmat(list1, list2, lang1, lang2)
-
- # sq_mean_pair = [(elm, np.square(cmat.diagonal(elm)).mean()) for elm in range(2 - ylim, xlim + 1)]
- # df = pd.DataFrame(sq_mean_pair, columns=['offset', 'sq_mean'])
- # df.plot.scatter('offset', 'sq_mean')
- # optimum_offset = df.offset[df.sq_mean.argmax()]
-
- # equiv to np.argmax(sq_mean) - (ylim - 2)
- # locate max, -ylim + 2 ...xlim: range(1 - ylim, xlim)
- # sqare sum
-
- sq_mean = [np.square(cmat.diagonal(elm)).mean() for elm in range(1 - ylim, xlim - 1)]
- # tot: xlim + ylim - 1
-
- # temp = [np.square(cmat.diagonal(elm)) for elm in range(2 - ylim, xlim + 1)]
- # sq_mean = [elm.mean() if np.any(elm) else 0.0 for elm in temp]
-
- # plt.figure()
- # plt.scatter(range(1 - ylim, xlim), sq_mean)
-
- offset = np.argmax(sq_mean) - (ylim - 1)
-
- text2lists.cmat = cmat
- text2lists.offset = offset
- text2lists.lang1 = lang1
- text2lists.lang2 = lang2
-
-    # shift list2 if offset >= 0, else shift list1
- if offset > -1:
- # list1a = list1[:]
- # list2a = [""] * offset + list2
- list2 = [""] * offset + list2
- else:
- list1 = [""] * (-offset) + list1
- # list1a = [""] * (-offset) + list1
- # list2a = list2[:]
-
- return list1, list2
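-
-
-if __name__ == "__main__":
-    # Minimal usage sketch with a made-up bilingual snippet (illustrative only).
-    # After a call, text2lists also exposes .cmat, .offset, .lang1 and .lang2 as attributes.
-    sample = "Hello there.\n你好。\nHow are you today?\n你今天好吗?"
-    left, right = text2lists(sample)
-    print(left)
-    print(right)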
diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md b/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md
deleted file mode 100644
index 30d30ba314c9842098c5c38d0a47ce780283d9d9..0000000000000000000000000000000000000000
--- a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md
+++ /dev/null
@@ -1,122 +0,0 @@
-## Prepare Datasets for OVSeg
-
-This doc is a modification/extension of [MaskFormer](https://github.com/facebookresearch/MaskFormer/blob/main/datasets/README.md) following [Detectron2 fromat](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html).
-
-A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
-for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
-This document explains how to setup the builtin datasets so they can be used by the above APIs.
-[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
-and how to add new datasets to them.
-
-OVSeg has builtin support for a few datasets.
-The datasets are assumed to exist in a directory specified by the environment variable
-`DETECTRON2_DATASETS`.
-Under this directory, detectron2 will look for datasets in the structure described below, if needed.
-```
-$DETECTRON2_DATASETS/
- coco/ # COCOStuff-171
- ADEChallengeData2016/ # ADE20K-150
- ADE20K_2021_17_01/ # ADE20K-847
- VOCdevkit/
- VOC2012/ # PASCALVOC-20
- VOC2010/ # PASCALContext-59, PASCALContext-459
-```
-
-You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
-If left unset, the default is `./datasets` relative to your current working directory.
-
-Unless otherwise noted, our model is trained on COCOStuff-171 and evaluated on ADE20K-150, ADE20K-847, PASCALVOC-20, PASCALContext-59 and PASCALContext-459.
-
-| dataset | split | # images | # categories |
-|:--------------:|:---------:|:--------:|:------------:|
-| COCO Stuff | train2017 | 118K | 171 |
-| ADE20K | val | 2K | 150/847 |
-| Pascal VOC | val | 1.5K | 20 |
-| Pascal Context | val | 5K | 59/459 |
-
-
-### Expected dataset structure for [COCO Stuff](https://github.com/nightrome/cocostuff):
-```
-coco/
- train2017/ # http://images.cocodataset.org/zips/train2017.zip
- annotations/ # http://images.cocodataset.org/annotations/annotations_trainval2017.zip
- stuffthingmaps/
- stuffthingmaps_trainval2017.zip # http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip
- train2017/
- # below are generated
- stuffthingmaps_detectron2/
- train2017/
-```
-
-The directory `stuffthingmaps_detectron2` is generated by running `python datasets/prepare_coco_stuff_sem_seg.py`.
-
-
-
-### Expected dataset structure for [ADE20k Scene Parsing (ADE20K-150)](http://sceneparsing.csail.mit.edu/):
-```
-ADEChallengeData2016/
- annotations/
- images/
- objectInfo150.txt
- # below are generated
- annotations_detectron2/
-```
-The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`.
-
-
-### Expected dataset structure for [ADE20k-Full (ADE20K-847)](https://github.com/CSAILVision/ADE20K#download):
-```
-ADE20K_2021_17_01/
- images/
- index_ade20k.pkl
- objects.txt
- # below are generated
- images_detectron2/
- annotations_detectron2/
-```
-The directories `images_detectron2` and `annotations_detectron2` are generated by running `python datasets/prepare_ade20k_full_sem_seg.py`.
-
-### Expected dataset structure for [Pascal VOC 2012 (PASCALVOC-20)](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#devkit):
-```
-VOCdevkit/VOC2012/
- Annotations/
- ImageSets/
- JPEGImages/
- SegmentationClass/
- SegmentationObject/
- SegmentationClassAug/ # https://github.com/kazuto1011/deeplab-pytorch/blob/master/data/datasets/voc12/README.md
- # below are generated
- images_detectron2/
- annotations_detectron2/
-```
-
-It starts with a tar file `VOCtrainval_11-May-2012.tar`.
-
-We use SBD augmentated training data as `SegmentationClassAug` following [Deeplab](https://github.com/kazuto1011/deeplab-pytorch/blob/master/data/datasets/voc12/README.md)
-
-The directories `images_detectron2` and `annotations_detectron2` are generated by running `python datasets/prepare_voc_sem_seg.py`.
-
-
-### Expected dataset structure for [Pascal Context](https://www.cs.stanford.edu/~roozbeh/pascal-context/):
-
-```
-VOCdevkit/VOC2010/
- Annotations/
- ImageSets/
- JPEGImages/
- SegmentationClass/
- SegmentationObject/
- # below are from https://www.cs.stanford.edu/~roozbeh/pascal-context/trainval.tar.gz
- trainval/
- labels.txt
- 59_labels.txt # https://www.cs.stanford.edu/~roozbeh/pascal-context/59_labels.txt
- pascalcontext_val.txt # https://drive.google.com/file/d/1BCbiOKtLvozjVnlTJX51koIveUZHCcUh/view?usp=sharing
- # below are generated
- annotations_detectron2/
- pc459_val
- pc59_val
-```
-It starts with a tar file `VOCtrainval_03-May-2010.tar`. You may want to download the 5K validation set [here](https://drive.google.com/file/d/1BCbiOKtLvozjVnlTJX51koIveUZHCcUh/view?usp=sharing).
-
-The directory `annotations_detectron2` is generated by running `python datasets/prepare_pascal_context.py`.
-
diff --git a/spaces/momegas/megabots/README.md b/spaces/momegas/megabots/README.md
deleted file mode 100644
index 270063909b10a4672bc588f193574687c3da7669..0000000000000000000000000000000000000000
--- a/spaces/momegas/megabots/README.md
+++ /dev/null
@@ -1,266 +0,0 @@
----
-title: 🤖 Megabots
-emoji: 🤖
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
-python_version: 3.10.0
----
-
-# 🤖 Megabots
-
-[](https://github.com/momegas/qnabot/actions/workflows/python-package.yml)
-[](#supported-python-versions)
-[](https://github.com/psf/black)
-[](https://github.com/momegas/megabots/blob/main/LICENCE)
-
-
-🤖 Megabots provides state-of-the-art, production-ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
-
-- 👉 Join us on Discord: https://discord.gg/zkqDWk5S7P
-- ✈️ Work is managed in this project: https://github.com/users/momegas/projects/5/views/2
-- 🤖 Documentation bot: https://huggingface.co/spaces/momegas/megabots
-
-**The Megabots library can be used to create bots that:**
-
-- ⌚️ are production ready, in minutes
-- 🗂️ can answer questions over documents
-- 💾 can connect to vector databases
-- 🎖️ automatically expose the bot as a robust API using FastAPI (early release)
-- 🏓 automatically expose the bot as a UI using Gradio
-
-**Coming soon:**
-
-- 🗣️ accept voice as an input using [whisper](https://github.com/openai/whisper)
-- 👍 validate and correct the outputs of LLMs using [guardrails](https://github.com/ShreyaR/guardrails)
-- 💰 semantically cache LLM queries and reduce costs by 10x using [GPTCache](https://github.com/zilliztech/GPTCache)
-- 🏋️ mega-easy LLM training
-- 🚀 mega-easy deployment
-
-🤖 Megabots is backed by some of the most popular tools for productionizing AI. It uses [LangChain](https://docs.langchain.com/docs/) for managing LLM chains, [FastAPI](https://fastapi.tiangolo.com/) to create a production-ready API, and [Gradio](https://gradio.app/) to create a UI. At the moment it uses [OpenAI](https://openai.com/) to generate answers, but we plan to support other LLMs in the future.
-
-## Getting started
-
-Note: This is a work in progress. The API might change.
-
-```bash
-pip install megabots
-```
-
-```python
-from megabots import bot
-import os
-
-os.environ["OPENAI_API_KEY"] = "my key"
-
-# Create a bot 👉 with one line of code. Automatically loads your data from ./index or index.pkl.
-# Keep in mind that you need to have one or another.
-qnabot = bot("qna-over-docs")
-
-# Ask a question
-answer = qnabot.ask("How do I use this bot?")
-
-# Save the index to save costs (GPT is used to create the index)
-qnabot.save_index("index.pkl")
-
-# Load the index from a previous run
-qnabot = bot("qna-over-docs", index="./index.pkl")
-
-# Or create the index from a directory of documents
-qnabot = bot("qna-over-docs", index="./index")
-
-# Change the model
-qnabot = bot("qna-over-docs", model="text-davinci-003")
-```
-
-## Changing the bot's prompt
-
-You can change the bot's prompt to customize it to your needs. In the `qna-over-docs` type of bot you will need to pass two variables: the `context` (knowledge retrieved from the index) and the `question` (the human question).
-
-```python
-from megabots import bot
-
-prompt = """
-Use the following pieces of context to answer the question at the end.
-If you don't know the answer, just say that you don't know, don't try to make up an answer.
-Answer in the style of Tony Stark.
-
-{context}
-
-Question: {question}
-Helpful humorous answer:"""
-
-qnabot = bot("qna-over-docs", index="./index.pkl", prompt=prompt)
-
-qnabot.ask("what was the first roster of the avengers?")
-```
-
-## Working with memory
-
-You can easily add memory to your `bot` using the `memory` parameter. It accepts a string with the type of memory to be used and comes with sane defaults.
-Should you need more configuration, you can use the `memory` function and pass the type of memory and the configuration you need.
-
-```python
-from megabots import bot
-
-qnabot = bot("qna-over-docs", index="./index.pkl", memory="conversation-buffer")
-
-print(qnabot.ask("who is iron man?"))
-print(qnabot.ask("was he in the first roster?"))
-# Bot should understand who "he" refers to.
-```
-
-Or use the `memory` factory function
-
-```python
-from megabots import bot, memory
-
-mem = memory("conversation-buffer-window", k=5)
-
-qnabot = bot("qna-over-docs", index="./index.pkl", memory=mem)
-
-print(qnabot.ask("who is iron man?"))
-print(qnabot.ask("was he in the first roster?"))
-```
-
-NOTE: For the `qna-over-docs` bot, when using memory with a custom prompt, remember to include one more variable in the prompt to hold the chat history. The variable name is `history`.
-
-```python
-from megabots import bot
-
-prompt = """
-Use the following pieces of context to answer the question at the end.
-If you don't know the answer, just say that you don't know, don't try to make up an answer.
-
-{context}
-
-{history}
-Human: {question}
-AI:"""
-
-qnabot = bot("qna-over-docs", prompt=prompt, index="./index.pkl", memory="conversation-buffer")
-
-print(qnabot.ask("who is iron man?"))
-print(qnabot.ask("was he in the first roster?"))
-```
-
-## Using Megabots with Milvus (more DBs coming soon)
-
-Megabots `bot` can also use Milvus as a backend for its search engine. You can find an example of how to do it below.
-
-In order to run Milvus you need to follow [this guide](https://milvus.io/docs/example_code.md) to download a docker compose file and run it.
-The command is:
-
-```bash
-wget https://raw.githubusercontent.com/milvus-io/pymilvus/v2.2.7/examples/hello_milvus.py
-```
-
-You can then [install Attu](https://milvus.io/docs/attu_install-docker.md) as a management tool for Milvus
-
-```python
-from megabots import bot
-
-# Attach a vectorstore by passing the name of the database. Default port for milvus is 19530 and default host is localhost
-# Point it to your files directory so that it can index the files and add them to the vectorstore
-bot = bot("qna-over-docs", index="./examples/files/", vectorstore="milvus")
-
-bot.ask("what was the first roster of the avengers?")
-```
-
-Or use the `vectorstore` factory function for more customisation
-
-```python
-
-from megabots import bot, vectorstore
-
-milvus = vectorstore("milvus", host="localhost", port=19530)
-
-bot = bot("qna-over-docs", index="./examples/files/", vectorstore=milvus)
-```
-
-## Exposing an API with FastAPI
-
-You can also create a FastAPI app that will expose the bot as an API using the `create_api` function.
-Assuming you file is called `main.py` run `uvicorn main:app --reload` to run the API locally.
-You should then be able to visit `http://localhost:8000/docs` to see the API documentation.
-
-```python
-from megabots import bot, create_api
-
-app = create_api(bot("qna-over-docs"))
-```
-
-## Exposing a Gradio chat-like interface
-
-You can expose a gradio UI for the bot using `create_interface` function.
-Assuming your file is called `ui.py` run `gradio qnabot/ui.py` to run the UI locally.
-You should then be able to visit `http://127.0.0.1:7860` to see the API documentation.
-
-```python
-from megabots import bot, create_interface
-
-demo = create_interface(bot("qna-over-docs"))
-```
-
-## Customising bot
-
-The `bot` function should serve as the starting point for creating and customising your bot. Below is a list of the available arguments in `bot`.
-
-| Argument | Description |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| task        | The type of bot to create. Available options: `qna-over-docs`. More coming soon |
-| index | Specifies the index to use for the bot. It can either be a saved index file (e.g., `index.pkl`) or a directory of documents (e.g., `./index`). In the case of the directory the index will be automatically created. If no index is specified `bot` will look for `index.pkl` or `./index` |
-| model       | The name of the model to use for the bot. You can specify a different model by providing its name, like "text-davinci-003". Supported models: `gpt-3.5-turbo` (default), `text-davinci-003`. More coming soon. |
-| prompt      | A string template for the prompt, which defines the format of the question and context passed to the model. The template should include the placeholder variables `{context}`, `{question}` and, when using memory, `{history}`. |
-| memory      | The type of memory to be used by the bot. Can be a string with the type of memory, or you can use the `memory` factory function. Supported memories: `conversation-buffer`, `conversation-buffer-window` |
-| vectorstore | The vectorstore to be used for the index. Can be a string with the name of the database, or you can use the `vectorstore` factory function. Supported DBs: `milvus`. |
-| sources     | When `sources` is `True` the bot will also include sources in the response. A known [issue](https://github.com/hwchase17/langchain/issues/2858) exists, where if you pass a custom prompt with sources the code breaks. |
-
-## How QnA bot works
-
-Large language models (LLMs) are powerful, but they can't answer questions about documents they haven't seen. If you want to use an LLM to answer questions about documents it was not trained on, you have to give it information about those documents. To solve this, we use "retrieval augmented generation."
-
-In simple terms, when you have a question, you first search for relevant documents. Then, you give the documents and the question to the language model to generate an answer. To make this work, you need your documents in a searchable format (an index). This process involves two main steps: (1) preparing your documents for easy querying, and (2) using the retrieval augmented generation method.
-
-`qna-over-docs` uses FAISS to create an index of documents and GPT to generate answers.
-
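-Conceptually, the retrieval-augmented flow looks roughly like the sketch below. The `embed` and `complete` helpers are hypothetical stand-ins for the embedding model and the LLM; the real bot delegates this work to LangChain, FAISS and OpenAI.
-
-```python
-import numpy as np
-
-def embed(text: str) -> np.ndarray:
-    # hypothetical embedding function; a real embedding model is used in practice
-    rng = np.random.default_rng(abs(hash(text)) % (2**32))
-    return rng.normal(size=128)
-
-def complete(prompt: str) -> str:
-    # hypothetical LLM call; GPT generates the answer in practice
-    return f"Answer based on: {prompt[:60]}..."
-
-# (1) index the documents
-docs = ["Iron Man founded the Avengers.", "The first roster had five members."]
-index = np.stack([embed(d) for d in docs])
-
-# (2) retrieve the document most relevant to the question
-question = "Who was in the first roster?"
-q = embed(question)
-scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
-context = docs[int(np.argmax(scores))]
-
-# (3) generate an answer from the question plus the retrieved context
-prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
-print(complete(prompt))
-```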
-```mermaid
-sequenceDiagram
- actor User
- participant API
- participant LLM
- participant Vectorstore
- participant IngestionEngine
- participant DataLake
- autonumber
-
- Note over API, DataLake: Ingestion phase
- loop Every X time
- IngestionEngine ->> DataLake: Load documents
- DataLake -->> IngestionEngine: Return data
- IngestionEngine -->> IngestionEngine: Split documents and Create embeddings
- IngestionEngine ->> Vectorstore: Store documents and embeddings
- end
-
- Note over API, DataLake: Generation phase
-
- User ->> API: Receive user question
- API ->> Vectorstore: Lookup documents in the index relevant to the question
- API ->> API: Construct a prompt from the question and any relevant documents
- API ->> LLM: Pass the prompt to the model
- LLM -->> API: Get response from model
- API -->> User: Return response
-
-```
-
-## How to contribute?
-
-We welcome any suggestions, problem reports, and contributions!
-For any changes you would like to make to this project, we invite you to submit an [issue](https://github.com/momegas/megabots/issues).
-
-For more information, see [`CONTRIBUTING`](https://github.com/momegas/megabots/blob/main/CONTRIBUTING.md) instructions.
diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py
deleted file mode 100644
index 077a24419364fdb5ae2f697f73e28615adae75a7..0000000000000000000000000000000000000000
--- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py
+++ /dev/null
@@ -1,181 +0,0 @@
-from collections import namedtuple
-import torch
-from torchvision import models as tv
-from IPython import embed
-
-class squeezenet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(squeezenet, self).__init__()
- pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- self.slice7 = torch.nn.Sequential()
- self.N_slices = 7
- for x in range(2):
- self.slice1.add_module(str(x), pretrained_features[x])
- for x in range(2,5):
- self.slice2.add_module(str(x), pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), pretrained_features[x])
- for x in range(10, 11):
- self.slice5.add_module(str(x), pretrained_features[x])
- for x in range(11, 12):
- self.slice6.add_module(str(x), pretrained_features[x])
- for x in range(12, 13):
- self.slice7.add_module(str(x), pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- h = self.slice6(h)
- h_relu6 = h
- h = self.slice7(h)
- h_relu7 = h
- vgg_outputs = namedtuple("SqueezeOutputs", ['relu1','relu2','relu3','relu4','relu5','relu6','relu7'])
- out = vgg_outputs(h_relu1,h_relu2,h_relu3,h_relu4,h_relu5,h_relu6,h_relu7)
-
- return out
-
-
-class alexnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(alexnet, self).__init__()
- alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(2):
- self.slice1.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(2, 5):
- self.slice2.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(10, 12):
- self.slice5.add_module(str(x), alexnet_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
- out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)
-
- return out
-
-class vgg16(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(vgg16, self).__init__()
- vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(4):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(4, 9):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(9, 16):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(16, 23):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(23, 30):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1_2 = h
- h = self.slice2(h)
- h_relu2_2 = h
- h = self.slice3(h)
- h_relu3_3 = h
- h = self.slice4(h)
- h_relu4_3 = h
- h = self.slice5(h)
- h_relu5_3 = h
- vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
- out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
-
- return out
-
-
-
-class resnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True, num=18):
- super(resnet, self).__init__()
- if(num==18):
- self.net = tv.resnet18(pretrained=pretrained)
- elif(num==34):
- self.net = tv.resnet34(pretrained=pretrained)
- elif(num==50):
- self.net = tv.resnet50(pretrained=pretrained)
- elif(num==101):
- self.net = tv.resnet101(pretrained=pretrained)
- elif(num==152):
- self.net = tv.resnet152(pretrained=pretrained)
- self.N_slices = 5
-
- self.conv1 = self.net.conv1
- self.bn1 = self.net.bn1
- self.relu = self.net.relu
- self.maxpool = self.net.maxpool
- self.layer1 = self.net.layer1
- self.layer2 = self.net.layer2
- self.layer3 = self.net.layer3
- self.layer4 = self.net.layer4
-
- def forward(self, X):
- h = self.conv1(X)
- h = self.bn1(h)
- h = self.relu(h)
- h_relu1 = h
- h = self.maxpool(h)
- h = self.layer1(h)
- h_conv2 = h
- h = self.layer2(h)
- h_conv3 = h
- h = self.layer3(h)
- h_conv4 = h
- h = self.layer4(h)
- h_conv5 = h
-
- outputs = namedtuple("Outputs", ['relu1','conv2','conv3','conv4','conv5'])
- out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5)
-
- return out
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py
deleted file mode 100644
index 1c52f135ea6f99d0effe8ce1f7d77cbd66be3745..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .models import linformer_roberta # noqa
diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh
deleted file mode 100644
index 1f42492ba7e12735c8743756c564f25f56052592..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name=caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus
-#SBATCH --nodes=1
-#SBATCH --ntasks=1
-#SBATCH --gpus=8
-#SBATCH --threads-per-core=2
-#SBATCH --gpu-bind=closest
-####SBATCH --nodelist=x1004c4s2b0n0
-#SBATCH --time=24:00:00
-#SBATCH -C MI250
-#SBATCH -A gda2204
-#SBATCH --mail-type=END,FAIL
-#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.out
-#SBATCH --exclusive
-#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr
-
-
-cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts
-source /lus/home/NAT/gda2204/mshukor/.bashrc
-
-conda activate main
-
-
-rm core-python3*
-
-
-srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh
-
-
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py
deleted file mode 100644
index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py
+++ /dev/null
@@ -1,429 +0,0 @@
-import enum
-from copy import deepcopy
-
-import numpy as np
-from skimage import img_as_ubyte
-from skimage.transform import rescale, resize
-try:
- from detectron2 import model_zoo
- from detectron2.config import get_cfg
- from detectron2.engine import DefaultPredictor
- DETECTRON_INSTALLED = True
-except:
- print("Detectron v2 is not installed")
- DETECTRON_INSTALLED = False
-
-from .countless.countless2d import zero_corrected_countless
-
-
-class ObjectMask():
- def __init__(self, mask):
- self.height, self.width = mask.shape
- (self.up, self.down), (self.left, self.right) = self._get_limits(mask)
- self.mask = mask[self.up:self.down, self.left:self.right].copy()
-
- @staticmethod
- def _get_limits(mask):
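-        # bounding box of the mask's True region, returned as ((top, bottom), (left, right)) index limits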
- def indicator_limits(indicator):
- lower = indicator.argmax()
- upper = len(indicator) - indicator[::-1].argmax()
- return lower, upper
-
- vertical_indicator = mask.any(axis=1)
- vertical_limits = indicator_limits(vertical_indicator)
-
- horizontal_indicator = mask.any(axis=0)
- horizontal_limits = indicator_limits(horizontal_indicator)
-
- return vertical_limits, horizontal_limits
-
- def _clean(self):
- self.up, self.down, self.left, self.right = 0, 0, 0, 0
- self.mask = np.empty((0, 0))
-
- def horizontal_flip(self, inplace=False):
- if not inplace:
- flipped = deepcopy(self)
- return flipped.horizontal_flip(inplace=True)
-
- self.mask = self.mask[:, ::-1]
- return self
-
- def vertical_flip(self, inplace=False):
- if not inplace:
- flipped = deepcopy(self)
- return flipped.vertical_flip(inplace=True)
-
- self.mask = self.mask[::-1, :]
- return self
-
- def image_center(self):
- y_center = self.up + (self.down - self.up) / 2
- x_center = self.left + (self.right - self.left) / 2
- return y_center, x_center
-
- def rescale(self, scaling_factor, inplace=False):
- if not inplace:
- scaled = deepcopy(self)
- return scaled.rescale(scaling_factor, inplace=True)
-
- scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5
- (up, down), (left, right) = self._get_limits(scaled_mask)
- self.mask = scaled_mask[up:down, left:right]
-
- y_center, x_center = self.image_center()
- mask_height, mask_width = self.mask.shape
- self.up = int(round(y_center - mask_height / 2))
- self.down = self.up + mask_height
- self.left = int(round(x_center - mask_width / 2))
- self.right = self.left + mask_width
- return self
-
- def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False):
- if not inplace:
- cropped = deepcopy(self)
- cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True)
- return cropped
-
- if vertical:
- if self.up >= self.height or self.down <= 0:
- self._clean()
- else:
- cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0)
- if cut_up != 0:
- self.mask = self.mask[cut_up:]
- self.up = 0
- if cut_down != 0:
- self.mask = self.mask[:-cut_down]
- self.down = self.height
-
- if horizontal:
- if self.left >= self.width or self.right <= 0:
- self._clean()
- else:
- cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0)
- if cut_left != 0:
- self.mask = self.mask[:, cut_left:]
- self.left = 0
- if cut_right != 0:
- self.mask = self.mask[:, :-cut_right]
- self.right = self.width
-
- return self
-
- def restore_full_mask(self, allow_crop=False):
- cropped = self.crop_to_canvas(inplace=allow_crop)
- mask = np.zeros((cropped.height, cropped.width), dtype=bool)
- mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask
- return mask
-
- def shift(self, vertical=0, horizontal=0, inplace=False):
- if not inplace:
- shifted = deepcopy(self)
- return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True)
-
- self.up += vertical
- self.down += vertical
- self.left += horizontal
- self.right += horizontal
- return self
-
- def area(self):
- return self.mask.sum()
-
-
-class RigidnessMode(enum.Enum):
- soft = 0
- rigid = 1
-
-
-class SegmentationMask:
- def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid,
- max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4,
- max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5,
- max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True,
- max_vertical_shift=0.1, position_shuffle=True):
- """
- :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for
- the instance.
- :param rigidness_mode: RigidnessMode object
- when soft, checks intersection only with the object from which the mask_object was produced
- when rigid, checks intersection with any foreground class object
- :param max_object_area: float; allowed upper bound for to be considered as mask_object.
- :param min_mask_area: float; lower bound for mask to be considered valid
- :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks;
- :param num_variants_per_mask: int; maximal number of the masks for the same object;
- :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks
- produced by horizontal shift of the same mask_object; higher value -> more diversity
- :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be
- covered by mask; lower value -> less the objects are covered
- :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground
- object; lower value -> mask is more on the background than on the objects
- :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area;
- :param max_scale_change: allowed scale change for the mask_object;
- :param horizontal_flip: if horizontal flips are allowed;
- :param max_vertical_shift: amount of vertical movement allowed;
- :param position_shuffle: shuffle
- """
-
- assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2'
- self.cfg = get_cfg()
- self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
- self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
- self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold
- self.predictor = DefaultPredictor(self.cfg)
-
- self.rigidness_mode = RigidnessMode(rigidness_mode)
- self.max_object_area = max_object_area
- self.min_mask_area = min_mask_area
- self.downsample_levels = downsample_levels
- self.num_variants_per_mask = num_variants_per_mask
- self.max_mask_intersection = max_mask_intersection
- self.max_foreground_coverage = max_foreground_coverage
- self.max_foreground_intersection = max_foreground_intersection
- self.max_hidden_area = max_hidden_area
- self.position_shuffle = position_shuffle
-
- self.max_scale_change = max_scale_change
- self.horizontal_flip = horizontal_flip
- self.max_vertical_shift = max_vertical_shift
-
- def get_segmentation(self, img):
- im = img_as_ubyte(img)
- panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"]
- return panoptic_seg, segment_info
-
- @staticmethod
- def _is_power_of_two(n):
- return (n != 0) and (n & (n-1) == 0)
-
- def identify_candidates(self, panoptic_seg, segments_info):
- potential_mask_ids = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy()
- area = mask.sum().item() / np.prod(panoptic_seg.shape)
- if area >= self.max_object_area:
- continue
- potential_mask_ids.append(segment["id"])
- return potential_mask_ids
-
- def downsample_mask(self, mask):
- height, width = mask.shape
- if not (self._is_power_of_two(height) and self._is_power_of_two(width)):
- raise ValueError("Image sides are not power of 2.")
-
- num_iterations = width.bit_length() - 1 - self.downsample_levels
- if num_iterations < 0:
- raise ValueError(f"Width is lower than 2^{self.downsample_levels}.")
-
- if height.bit_length() - 1 < num_iterations:
- raise ValueError("Height is too low to perform downsampling")
-
- downsampled = mask
- for _ in range(num_iterations):
- downsampled = zero_corrected_countless(downsampled)
-
- return downsampled
-
- def _augmentation_params(self):
- scaling_factor = np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change)
- if self.horizontal_flip:
- horizontal_flip = bool(np.random.choice(2))
- else:
- horizontal_flip = False
- vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift)
-
- return {
- "scaling_factor": scaling_factor,
- "horizontal_flip": horizontal_flip,
- "vertical_shift": vertical_shift
- }
-
- def _get_intersection(self, mask_array, mask_object):
- intersection = mask_array[
- mask_object.up:mask_object.down, mask_object.left:mask_object.right
- ] & mask_object.mask
- return intersection
-
- def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks):
- for existing_mask in prev_masks:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area
- if (intersection_existing > self.max_mask_intersection) or \
- (intersection_current > self.max_mask_intersection):
- return False
- return True
-
- def _check_foreground_intersection(self, aug_mask, foreground):
- for existing_mask in foreground:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- if intersection_existing > self.max_foreground_coverage:
- return False
- intersection_mask = intersection_area / aug_mask.area()
- if intersection_mask > self.max_foreground_intersection:
- return False
- return True
-
- def _move_mask(self, mask, foreground):
- # Obtaining properties of the original mask_object:
- orig_mask = ObjectMask(mask)
-
- chosen_masks = []
- chosen_parameters = []
- # to fix the case when resizing gives mask_object consisting only of False
- scaling_factor_lower_bound = 0.
-
- for var_idx in range(self.num_variants_per_mask):
- # Obtaining augmentation parameters and applying them to the downscaled mask_object
- augmentation_params = self._augmentation_params()
- augmentation_params["scaling_factor"] = min([
- augmentation_params["scaling_factor"],
- 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1.,
- 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1.
- ])
- augmentation_params["scaling_factor"] = max([
- augmentation_params["scaling_factor"], scaling_factor_lower_bound
- ])
-
- aug_mask = deepcopy(orig_mask)
- aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True)
- if augmentation_params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
- total_aug_area = aug_mask.area()
- if total_aug_area == 0:
- scaling_factor_lower_bound = 1.
- continue
-
- # Fix if the element vertical shift is too strong and shown area is too small:
- vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows
- # number of rows which are allowed to be hidden from upper and lower parts of image respectively
- max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area)
- max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area)
- # correcting vertical shift, so not too much area will be hidden
- augmentation_params["vertical_shift"] = np.clip(
- augmentation_params["vertical_shift"],
- -(aug_mask.up + max_hidden_up) / aug_mask.height,
- (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height
- )
- # Applying vertical shift:
- vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"]))
- aug_mask.shift(vertical=vertical_shift, inplace=True)
- aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True)
-
- # Choosing horizontal shift:
- max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area)
- horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area
- max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area)
- max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area)
- allowed_shifts = np.arange(-max_hidden_left, aug_mask.width -
- (aug_mask.right - aug_mask.left) + max_hidden_right + 1)
- allowed_shifts = - (aug_mask.left - allowed_shifts)
-
- if self.position_shuffle:
- np.random.shuffle(allowed_shifts)
-
- mask_is_found = False
- for horizontal_shift in allowed_shifts:
- aug_mask_left = deepcopy(aug_mask)
- aug_mask_left.shift(horizontal=horizontal_shift, inplace=True)
- aug_mask_left.crop_to_canvas(inplace=True)
-
- prev_masks = [mask] + chosen_masks
- is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \
- self._check_foreground_intersection(aug_mask_left, foreground)
- if is_mask_suitable:
- aug_draw = aug_mask_left.restore_full_mask()
- chosen_masks.append(aug_draw)
- augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width
- chosen_parameters.append(augmentation_params)
- mask_is_found = True
- break
-
- if not mask_is_found:
- break
-
- return chosen_parameters
-
- def _prepare_mask(self, mask):
- height, width = mask.shape
- target_width = width if self._is_power_of_two(width) else (1 << width.bit_length())
- target_height = height if self._is_power_of_two(height) else (1 << height.bit_length())
-
- return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32')
-
- def get_masks(self, im, return_panoptic=False):
- panoptic_seg, segments_info = self.get_segmentation(im)
- potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info)
-
- panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy())
- downsampled = self.downsample_mask(panoptic_seg_scaled)
- scene_objects = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = downsampled == segment["id"]
- if not np.any(mask):
- continue
- scene_objects.append(mask)
-
- mask_set = []
- for mask_id in potential_mask_ids:
- mask = downsampled == mask_id
- if not np.any(mask):
- continue
-
- if self.rigidness_mode is RigidnessMode.soft:
- foreground = [mask]
- elif self.rigidness_mode is RigidnessMode.rigid:
- foreground = scene_objects
- else:
-                raise ValueError(f'Unexpected rigidness_mode: {self.rigidness_mode}')
-
- masks_params = self._move_mask(mask, foreground)
-
- full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy())
-
- for params in masks_params:
- aug_mask = deepcopy(full_mask)
- aug_mask.rescale(params["scaling_factor"], inplace=True)
- if params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
-
- vertical_shift = int(round(aug_mask.height * params["vertical_shift"]))
- horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"]))
- aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True)
- aug_mask = aug_mask.restore_full_mask().astype('uint8')
- if aug_mask.mean() <= self.min_mask_area:
- continue
- mask_set.append(aug_mask)
-
- if return_panoptic:
- return mask_set, panoptic_seg.detach().cpu().numpy()
- else:
- return mask_set
-
-
-def propose_random_square_crop(mask, min_overlap=0.5):
- height, width = mask.shape
- mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing
-
- if height < width:
- crop_size = height
- obj_left, obj_right = mask_xs.min(), mask_xs.max()
- obj_width = obj_right - obj_left
- left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size))
- right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap))
- start_x = np.random.randint(left_border, right_border)
- return start_x, 0, start_x + crop_size, height
- else:
- crop_size = width
- obj_top, obj_bottom = mask_ys.min(), mask_ys.max()
- obj_height = obj_bottom - obj_top
- top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size))
- bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap))
- start_y = np.random.randint(top_border, bottom_border)
- return 0, start_y, width, start_y + crop_size
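A standalone sketch of the bounding-box computation that `ObjectMask._get_limits` above performs (argmax over the row/column "any" indicators), assuming only NumPy; the array and values are illustrative, not from the deleted file:

```python
import numpy as np

def mask_limits(mask: np.ndarray):
    """Return ((up, down), (left, right)) half-open bounds of the True region."""
    def indicator_limits(indicator: np.ndarray):
        lower = int(indicator.argmax())                          # first True entry
        upper = int(len(indicator) - indicator[::-1].argmax())   # one past the last True entry
        return lower, upper

    vertical = mask.any(axis=1)     # rows that contain the object
    horizontal = mask.any(axis=0)   # columns that contain the object
    return indicator_limits(vertical), indicator_limits(horizontal)

m = np.zeros((6, 8), dtype=bool)
m[2:4, 3:6] = True
print(mask_limits(m))  # ((2, 4), (3, 6))
```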
diff --git a/spaces/nanomenta/sketch_frame_interpolation/README.md b/spaces/nanomenta/sketch_frame_interpolation/README.md
deleted file mode 100644
index e84b0a85fb4c03fa520d6e04bbda6fe44809af3b..0000000000000000000000000000000000000000
--- a/spaces/nanomenta/sketch_frame_interpolation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sketch Frame Interpolation
-emoji: 🐠🐠
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nateraw/lavila/eval_narrator.py b/spaces/nateraw/lavila/eval_narrator.py
deleted file mode 100644
index 25d7d86eda334fbe8b4e9084acad47f7ceb2d2ae..0000000000000000000000000000000000000000
--- a/spaces/nateraw/lavila/eval_narrator.py
+++ /dev/null
@@ -1,308 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os.path as osp
-import time
-from collections import OrderedDict
-
-import numpy as np
-# https://github.com/numpy/numpy/issues/21079
-try:
- import numpy.distutils
- numpy.distutils.__config__.blas_opt_info = np.distutils.__config__.blas_ilp64_opt_info
-except Exception:
- pass
-from nlgeval import NLGEval
-
-import torch
-import torchvision.transforms as transforms
-import torchvision.transforms._transforms_video as transforms_video
-
-from lavila.data import datasets
-from lavila.data.video_transforms import Permute, SpatialCrop, TemporalCrop
-from lavila.models import models
-from lavila.models.utils import inflate_positional_embeds
-from lavila.utils import distributed as dist_utils
-from lavila.utils.preprocess import generate_tokenizer
-
-
-def decode_one(generated_ids, tokenizer):
- # get the index of
- if tokenizer.eos_token_id == tokenizer.bos_token_id:
- if tokenizer.eos_token_id in generated_ids[1:].tolist():
- eos_id = generated_ids[1:].tolist().index(tokenizer.eos_token_id) + 1
- else:
- eos_id = len(generated_ids.tolist()) - 1
- elif tokenizer.eos_token_id in generated_ids.tolist():
- eos_id = generated_ids.tolist().index(tokenizer.eos_token_id)
- else:
- eos_id = len(generated_ids.tolist()) - 1
- generated_text_str = tokenizer.tokenizer.decode(generated_ids[1:eos_id].tolist())
- return generated_text_str
-
-
-def get_args_parser():
- parser = argparse.ArgumentParser(description='LAVILA 0-shot evaluations', add_help=False)
- parser.add_argument('--dataset', default='ego4d', type=str,
- choices=['ego4d'])
- parser.add_argument('--root',
- default='datasets/Ego4D/video_5min_chunks_288px/',
- type=str, help='path to dataset root')
- parser.add_argument('--metadata-val',
- default='datasets/Ego4D/ego4d_val.pkl',
- type=str, help='path to metadata file (val set)')
- parser.add_argument('--output-dir', default='./', type=str, help='output dir')
- parser.add_argument('--num-crops', default=1, type=int, help='number of crops in transforms')
- parser.add_argument('--num-clips', default=1, type=int, help='number of clips (for untrimmed videos, eg. Charades)')
- parser.add_argument('--clip-length', default=4, type=int, help='clip length')
- parser.add_argument('--clip-stride', default=16, type=int, help='clip stride')
- parser.add_argument('--sparse-sample', action='store_true', help='switch to sparse sampling')
- parser.add_argument('--batch-size', default=16, type=int, help='batch_size')
- # captioning options
- parser.add_argument('--caption-sample', default='multinomial_sample',
- choices=['multinomial_sample', 'beam_sample', 'group_beam_search'])
- parser.add_argument('--caption-top-k', default=None, type=int, help='top-k sampling (predecessor of nucleus sampling)')
-    parser.add_argument('--caption-top-p', default=0.95, type=float, help='top-p sampling (aka nucleus sampling)')
- parser.add_argument('--caption-num-beams', default=3, type=int)
- parser.add_argument('--caption-num-beam-groups', default=1, type=int)
- parser.add_argument('--caption-temperature', default=0.7, type=float)
- parser.add_argument('--caption-length-penalty', default=1.0, type=float)
- parser.add_argument('--caption-num-return-sequences', default=1, type=int)
- parser.add_argument('--caption-max-len', default=77, type=int)
- parser.add_argument('--caption-disable-visual', action='store_true')
-    parser.add_argument('--caption-early-stop', action='store_true', help='early stopping to save computation')
-    parser.add_argument('--caption-no-gt', action='store_true', help='do not condition on ground-truth text (referenced by the group_beam_search branch below)')
- parser.add_argument('--caption-output-filename', default='caption.txt', type=str)
- # others
- parser.add_argument('--eval-freq', default=1000, type=int,
- help='percentage (1/eval_freq) of val data to evaluate (for fast prototyping)')
- parser.add_argument('--print-freq', default=10, type=int)
- parser.add_argument('-j', '--workers', default=10, type=int, metavar='N',
- help='number of data loading workers per process')
- parser.add_argument('--resume', default='', type=str, help='path to latest checkpoint')
- parser.add_argument('--use-half', action='store_true')
- return parser
-
-
-def main(args):
- if args.resume:
- ckpt_path = args.resume
- elif osp.isfile(osp.join(args.output_dir, 'checkpoint_best.pt')):
- ckpt_path = osp.join(args.output_dir, 'checkpoint_best.pt')
- else:
- raise Exception('no checkpoint found')
-
- ckpt = torch.load(ckpt_path, map_location='cpu')
-
- # create model
- state_dict = OrderedDict()
- for k, v in ckpt['state_dict'].items():
- state_dict[k.replace('module.', '')] = v
-
- old_args = ckpt['args']
- print('=> creating model: {}'.format(old_args.model))
- model = getattr(models, old_args.model)(
- text_use_cls_token=old_args.use_cls_token,
- project_embed_dim=old_args.project_embed_dim,
- gated_xattn=False if 'gated_xattn' not in old_args else old_args.gated_xattn,
- timesformer_gated_xattn=False if 'timesformer_gated_xattn' not in old_args else old_args.timesformer_gated_xattn,
- timesformer_freeze_space=False if 'timesformer_freeze_space' not in old_args else old_args.timesformer_freeze_space,
- freeze_lm_vclm=False if 'freeze_lm_vclm' not in old_args else old_args.freeze_lm_vclm,
- freeze_visual_vclm=False if 'freeze_visual_vclm' not in old_args else old_args.freeze_visual_vclm,
- num_frames=args.clip_length,
- drop_path_rate=0,
- )
- model.cuda()
- if 'TIMESFORMER' in old_args.model or 'EGOVLP' in old_args.model:
- # inflate weight
- print('=> inflating PE in models due to different frame numbers')
- state_dict = inflate_positional_embeds(
- model.state_dict(), state_dict,
- num_frames=args.clip_length,
- load_temporal_fix='bilinear',
- )
- model.load_state_dict(state_dict, strict=True)
- print("=> loaded resume checkpoint '{}' (epoch {}, best_metric = {})".format(args.resume, ckpt['epoch'], ckpt['best_acc1']))
-
- torch.backends.cudnn.benchmark = True
-
- tokenizer = generate_tokenizer(old_args.model)
- crop_size = 224 if '336PX' not in old_args.model else 336
- if args.num_crops == 1 and args.num_clips == 1:
- val_transform = transforms.Compose([
- Permute([3, 0, 1, 2]), # T H W C -> C T H W
- transforms.Resize(crop_size),
- transforms.CenterCrop(crop_size),
- (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if ('OPENAI' not in old_args.model) else
- transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])),
- ])
- else:
- val_transform = transforms.Compose([
- Permute([3, 0, 1, 2]), # T H W C -> C T H W
- transforms.Resize(crop_size),
- (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if ('OPENAI' not in old_args.model) else
- transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])),
- TemporalCrop(frames_per_clip=args.clip_length, stride=args.clip_length),
- SpatialCrop(crop_size=crop_size, num_crops=args.num_crops),
- ])
-
- val_dataset = datasets.VideoCaptionDatasetCLIP(
- args.dataset,
- args.root,
- args.metadata_val,
- transform=val_transform,
- is_training=False,
- tokenizer=tokenizer,
- clip_length=args.clip_length,
- clip_stride=args.clip_stride,
- sparse_sample=False,
- subsample_stride=args.eval_freq,
- )
-
- val_loader = torch.utils.data.DataLoader(
- val_dataset, batch_size=args.batch_size, shuffle=False,
- num_workers=args.workers, pin_memory=True, drop_last=False)
-
- validate_caption(val_loader, model, tokenizer, args.caption_output_filename, use_half=args.use_half)
-
-
-def validate_caption(val_loader, model, tokenizer, output_filename='caption.txt', use_half=False):
- model.eval()
-    if use_half:
- model = model.half()
-    nlgeval = NLGEval()
-    from transformers import BertTokenizer
-    # load the BERT tokenizer once here, rather than re-creating it for every sample inside the loop below
-    bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
- f = open(output_filename, 'w')
- ppls_all = []
- ppls_with_teacher_all = []
- reference = []
- hypothesis = []
- end_time = time.time()
- id_offset = 0
- print('=> start forwarding')
- with torch.no_grad():
- for i, inputs in enumerate(val_loader):
- if i % args.print_freq == 0:
- print('finish batch {}/{} in {} sec'.format(i, len(val_loader), time.time() - end_time))
- end_time = time.time()
- images = inputs[0].cuda(non_blocking=True)
- if use_half:
- images = images.half()
- target = inputs[1].cuda(non_blocking=True)
-
- # encode images
- image_features = dist_utils.get_model(model).encode_image(images)
-
- # teacher forcing (to get standard ppl metric)
- generated_text_ids_with_teacher, ppls_with_teacher = dist_utils.get_model(model).generate(
- image_features,
- tokenizer,
- target=target,
- max_text_length=args.caption_max_len,
- top_k=args.caption_top_k,
- top_p=args.caption_top_p,
- teacher_forcing=True,
- early_stopping=args.caption_early_stop,
- )
-
- if args.caption_sample == 'multinomial_sample':
- assert args.caption_num_beam_groups == 1
- generated_text_ids, ppls = dist_utils.get_model(model).generate(
- image_features,
- tokenizer,
- target=target.repeat_interleave(args.caption_num_return_sequences, dim=0),
- max_text_length=args.caption_max_len,
- top_k=args.caption_top_k,
- top_p=args.caption_top_p,
- num_return_sequences=args.caption_num_return_sequences,
- temperature=args.caption_temperature,
- early_stopping=args.caption_early_stop,
- )
- elif args.caption_sample == 'beam_sample':
- assert args.caption_num_beam_groups == 1
- generated_text_ids, ppls = dist_utils.get_model(model).beam_sample(
- image_features,
- tokenizer,
- target=target,
- max_text_length=args.caption_max_len,
- top_k=args.caption_top_k,
- top_p=args.caption_top_p,
- temperature=args.caption_temperature,
- length_penalty=args.caption_length_penalty,
- num_beams=args.caption_num_beams,
- num_return_sequences=args.caption_num_return_sequences,
- early_stopping=args.caption_early_stop,
- )
- elif args.caption_sample == 'group_beam_search':
- assert args.caption_num_beam_groups > 1 and args.caption_num_beams % args.caption_num_beam_groups == 0
- generated_text_ids, ppls = dist_utils.get_model(model).group_beam_search(
- image_features,
- tokenizer,
- target=target if not args.caption_no_gt else None,
- max_text_length=args.caption_max_len,
- top_k=args.caption_top_k,
- top_p=args.caption_top_p,
- temperature=args.caption_temperature,
- length_penalty=args.caption_length_penalty,
- num_beams=args.caption_num_beams,
- num_beam_groups=args.caption_num_beam_groups,
- num_return_sequences=args.caption_num_return_sequences,
- early_stopping=args.caption_early_stop,
- )
- else:
- raise NotImplementedError
- ppls_all.append(ppls.reshape(-1, args.caption_num_return_sequences).mean(1))
- ppls_with_teacher_all.append(ppls_with_teacher)
-
- for j in range(generated_text_ids.shape[0] // args.caption_num_return_sequences):
- for k in range(args.caption_num_return_sequences):
- jj = j * args.caption_num_return_sequences + k
-
- generated_text_str = decode_one(generated_text_ids[jj], tokenizer)
- gt_text = decode_one(target[j], tokenizer)
- generated_text_str_with_teacher = decode_one(generated_text_ids_with_teacher[j], tokenizer)
-
-                    # bert_tokenizer is created once at the top of this function
- gt_text = bert_tokenizer.decode(bert_tokenizer(gt_text)['input_ids'][1:-1])
- generated_text_str = bert_tokenizer.decode(bert_tokenizer(generated_text_str)['input_ids'][1:-1])
- generated_text_str_with_teacher = bert_tokenizer.decode(bert_tokenizer(generated_text_str_with_teacher)['input_ids'][1:-1])
- reference.append(gt_text)
- hypothesis.append(generated_text_str)
- s1 = '[{:6d}] Groundtruth | | {}'.format(id_offset + j, gt_text)
- s2 = '[{:6d}] Generated | PPL : {:9.3f} | {}'.format(id_offset + j, ppls[jj], generated_text_str)
- s3 = '[{:6d}] Generated (w/. teacher) | PPL : {:9.3f} | {}'.format(id_offset + j, ppls_with_teacher[j], generated_text_str_with_teacher)
- for s in [s1, s2, s3]:
- # if i % args.print_freq == 0:
- # print(s)
- f.write('{} \n'.format(s))
- id_offset += generated_text_ids.shape[0] // args.caption_num_return_sequences
-
- ppls_with_teacher_all = torch.cat(ppls_with_teacher_all, dim=0)
- ppls_all = torch.cat(ppls_all, dim=0)
-
- print('PPL (w/. teacher) = {:9.3f}'.format(ppls_with_teacher_all.mean().item()))
- print('PPL (w/o. teacher) = {:9.3f}'.format(ppls_all.mean().item()))
- f.write('PPL (w/. teacher) = {:9.3f} \n'.format(ppls_with_teacher_all.mean().item()))
- f.write('PPL (w/o. teacher) = {:9.3f} \n'.format(ppls_all.mean().item()))
-
- print('Avg length for reference: {:9.3f}'.format(sum(map(lambda sentence: len(sentence.split(' ')), reference)) / len(reference)))
- print('Avg length for hypothesis: {:9.3f}'.format(sum(map(lambda sentence: len(sentence.split(' ')), hypothesis)) / len(hypothesis)))
- f.write('Avg length for reference: {:9.3f} \n'.format(sum(map(lambda sentence: len(sentence.split(' ')), reference)) / len(reference)))
- f.write('Avg length for hypothesis: {:9.3f} \n'.format(sum(map(lambda sentence: len(sentence.split(' ')), hypothesis)) / len(hypothesis)))
-
- print('=> Calling NLGEval')
- f.write('=> Calling NLGEval\n')
- metrics_dict = nlgeval.compute_metrics([reference], hypothesis)
- for k in metrics_dict:
- print('{:16s} = {:9.3f}'.format(k, metrics_dict[k]))
- f.write('{:16s} = {:9.3f} \n'.format(k, metrics_dict[k]))
- f.close()
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser('lavila 0-shot evaluations', parents=[get_args_parser()])
- args = parser.parse_args()
- main(args)
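`decode_one` above keeps only the token ids between the leading BOS token and the first EOS token before decoding. A small standalone sketch of that truncation step, using made-up token ids instead of a real tokenizer:

```python
def truncate_at_eos(generated_ids, eos_token_id):
    """Drop the BOS token and everything from the first EOS token onwards."""
    body = list(generated_ids)[1:]          # skip the leading BOS token
    if eos_token_id in body:
        body = body[:body.index(eos_token_id)]
    return body                             # ids to hand to the tokenizer's decode()

# <bos>=0, generated tokens 5 9 7, then <eos>=1 followed by padding
print(truncate_at_eos([0, 5, 9, 7, 1, 1, 1], eos_token_id=1))  # [5, 9, 7]
```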
diff --git a/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py b/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py
deleted file mode 100644
index 04f40b6d7b842b2d41ec64404ec33cd01ae01d0a..0000000000000000000000000000000000000000
--- a/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-A script to run multinode training with submitit.
-"""
-import argparse
-import os
-import uuid
-from pathlib import Path
-
-import main_finetune_retrieval as main_finetune
-import submitit
-
-
-def parse_args():
- parser = main_finetune.get_args_parser()
- parser = argparse.ArgumentParser("Submitit for lavila fine-tuning", parents=[parser])
- parser.add_argument("--ngpus", default=8, type=int, help="Number of gpus to request on each node")
- parser.add_argument("--nodes", default=8, type=int, help="Number of nodes to request")
- parser.add_argument("--timeout", default=2880, type=int, help="Duration of the job")
- parser.add_argument("--job_dir", default="", type=str, help="Job dir. Leave empty for automatic.")
-
- parser.add_argument("--partition", default="learnlab", type=str, help="Partition where to submit")
- parser.add_argument("--use_volta32", action='store_true', help="Big models? Use this")
- parser.add_argument('--comment', default="", type=str,
- help='Comment to pass to scheduler, e.g. priority message')
- return parser.parse_args()
-
-
-def get_shared_folder() -> Path:
- user = os.getenv("USER")
- if Path("/checkpoint/").is_dir():
- p = Path(f"/checkpoint/{user}/experiments/lavila_ft")
- p.mkdir(exist_ok=True)
- return p
- raise RuntimeError("No shared folder available")
-
-
-def get_init_file():
-    # Init file must not exist, but its parent dir must exist.
- os.makedirs(str(get_shared_folder()), exist_ok=True)
- init_file = get_shared_folder() / f"{uuid.uuid4().hex}_init"
- if init_file.exists():
- os.remove(str(init_file))
- return init_file
-
-
-class Trainer(object):
- def __init__(self, args):
- self.args = args
-
- def __call__(self):
- import main_finetune_retrieval as main_finetune
-
- self._setup_gpu_args()
- main_finetune.main(self.args)
-
- def checkpoint(self):
- import submitit
-
- self.args.dist_url = get_init_file().as_uri()
- print("Requeuing ", self.args)
- empty_trainer = type(self)(self.args)
- return submitit.helpers.DelayedSubmission(empty_trainer)
-
- def _setup_gpu_args(self):
- import submitit
- from pathlib import Path
-
- job_env = submitit.JobEnvironment()
- self.args.output_dir = Path(str(self.args.output_dir).replace("%j", str(job_env.job_id)))
- self.args.gpu = job_env.local_rank
- self.args.rank = job_env.global_rank
- self.args.world_size = job_env.num_tasks
- print(f"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}")
-
-
-def main():
- args = parse_args()
- if args.job_dir == "":
- args.job_dir = get_shared_folder() / "%j"
-
- # Note that the folder will depend on the job_id, to easily track experiments
- executor = submitit.AutoExecutor(folder=args.job_dir, slurm_max_num_timeout=30)
-
- num_gpus_per_node = args.ngpus
- nodes = args.nodes
- timeout_min = args.timeout
-
- partition = args.partition
- kwargs = {}
- if args.use_volta32:
- kwargs['slurm_constraint'] = 'volta32gb'
- if args.comment:
- kwargs['slurm_comment'] = args.comment
-
- executor.update_parameters(
- mem_gb=40 * num_gpus_per_node,
- gpus_per_node=num_gpus_per_node,
- tasks_per_node=num_gpus_per_node, # one task per GPU
- cpus_per_task=10,
- nodes=nodes,
- timeout_min=timeout_min, # max is 60 * 72
- # Below are cluster dependent parameters
- slurm_partition=partition,
- slurm_signal_delay_s=120,
- **kwargs
- )
-
- executor.update_parameters(name="lavila_ft")
-
- args.dist_url = get_init_file().as_uri()
- args.output_dir = args.job_dir
-
- trainer = Trainer(args)
- job = executor.submit(trainer)
-
- print("Submitted job_id:", job.job_id)
-
-
-if __name__ == "__main__":
- main()
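The script above wraps the fine-tuning entry point in a `Trainer` callable and submits it through submitit's `AutoExecutor`. A minimal sketch of that pattern; the partition name and resource sizes here are placeholders, not the values used by the original script:

```python
import submitit

def train(num_steps: int) -> str:
    # stand-in for main_finetune.main(args)
    return f"ran {num_steps} steps"

executor = submitit.AutoExecutor(folder="submitit_logs/%j")
executor.update_parameters(
    name="lavila_ft_demo",
    nodes=1,
    gpus_per_node=1,
    tasks_per_node=1,       # one task per GPU, as in the script above
    cpus_per_task=10,
    timeout_min=60,
    slurm_partition="dev",  # cluster-dependent placeholder
)

job = executor.submit(train, 100)
print("Submitted job_id:", job.job_id)
print(job.result())         # blocks until the Slurm job finishes
```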
diff --git a/spaces/nateraw/yolov6/yolov6/solver/build.py b/spaces/nateraw/yolov6/yolov6/solver/build.py
deleted file mode 100644
index 0684ff7bfae7db248b29850d8ed2e8a33ff623b1..0000000000000000000000000000000000000000
--- a/spaces/nateraw/yolov6/yolov6/solver/build.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import math
-
-import torch
-import torch.nn as nn
-
-from yolov6.utils.events import LOGGER
-
-
-def build_optimizer(cfg, model):
- """ Build optimizer from cfg file."""
- g_bnw, g_w, g_b = [], [], []
- for v in model.modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- g_b.append(v.bias)
- if isinstance(v, nn.BatchNorm2d):
- g_bnw.append(v.weight)
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- g_w.append(v.weight)
-
-    assert cfg.solver.optim in ('SGD', 'Adam'), 'ERROR: unknown optimizer, only SGD and Adam are supported'
- if cfg.solver.optim == 'SGD':
- optimizer = torch.optim.SGD(g_bnw, lr=cfg.solver.lr0, momentum=cfg.solver.momentum, nesterov=True)
- elif cfg.solver.optim == 'Adam':
- optimizer = torch.optim.Adam(g_bnw, lr=cfg.solver.lr0, betas=(cfg.solver.momentum, 0.999))
-
- optimizer.add_param_group({'params': g_w, 'weight_decay': cfg.solver.weight_decay})
- optimizer.add_param_group({'params': g_b})
-
- del g_bnw, g_w, g_b
- return optimizer
-
-
-def build_lr_scheduler(cfg, optimizer, epochs):
- """Build learning rate scheduler from cfg file."""
-    if cfg.solver.lr_scheduler != 'Cosine':
-        LOGGER.error('unknown lr scheduler, use Cosine defaulted')
-    # cosine decay of the lr multiplier from 1.0 down to cfg.solver.lrf over the given number of epochs
-    lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (cfg.solver.lrf - 1) + 1
-
- scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- return scheduler, lf
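`build_lr_scheduler` above hands a cosine factor to `LambdaLR`; the factor moves the learning-rate multiplier from 1.0 at epoch 0 down to `cfg.solver.lrf` at the last epoch. A short sketch with example values for `epochs` and `lrf`:

```python
import math

epochs, lrf = 10, 0.01  # example values, not config defaults
lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (lrf - 1) + 1

for epoch in (0, epochs // 2, epochs):
    print(epoch, round(lf(epoch), 4))
# 0 -> 1.0 (full base lr), 5 -> 0.505, 10 -> 0.01 (base lr * lrf)
```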
diff --git a/spaces/nathanTQ/ChatDev/camel/model_backend.py b/spaces/nathanTQ/ChatDev/camel/model_backend.py
deleted file mode 100644
index 6d95dc562bbe34438acc8548fc5f5015dda08c1d..0000000000000000000000000000000000000000
--- a/spaces/nathanTQ/ChatDev/camel/model_backend.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-# Licensed under the Apache License, Version 2.0 (the “License”);
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an “AS IS” BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-from abc import ABC, abstractmethod
-from typing import Any, Dict
-
-import openai
-import tiktoken
-
-from camel.typing import ModelType
-from chatdev.utils import log_and_print_online
-
-
-class ModelBackend(ABC):
- r"""Base class for different model backends.
- May be OpenAI API, a local LLM, a stub for unit tests, etc."""
-
- @abstractmethod
- def run(self, *args, **kwargs) -> Dict[str, Any]:
- r"""Runs the query to the backend model.
-
- Raises:
- RuntimeError: if the return value from OpenAI API
- is not a dict that is expected.
-
- Returns:
- Dict[str, Any]: All backends must return a dict in OpenAI format.
- """
- pass
-
-
-class OpenAIModel(ModelBackend):
- r"""OpenAI API in a unified ModelBackend interface."""
-
- def __init__(self, model_type: ModelType, model_config_dict: Dict) -> None:
- super().__init__()
- self.model_type = model_type
- self.model_config_dict = model_config_dict
-
- def run(self, *args, **kwargs) -> Dict[str, Any]:
- string = "\n".join([message["content"] for message in kwargs["messages"]])
- encoding = tiktoken.encoding_for_model(self.model_type.value)
- num_prompt_tokens = len(encoding.encode(string))
- gap_between_send_receive = 50 # known issue
- num_prompt_tokens += gap_between_send_receive
-
- num_max_token_map = {
- "gpt-3.5-turbo": 4096,
- "gpt-3.5-turbo-16k": 16384,
- "gpt-3.5-turbo-0613": 4096,
- "gpt-3.5-turbo-16k-0613": 16384,
- "gpt-4": 8192,
- "gpt-4-0613": 8192,
- "gpt-4-32k": 32768,
- }
- num_max_token = num_max_token_map[self.model_type.value]
- num_max_completion_tokens = num_max_token - num_prompt_tokens
- self.model_config_dict['max_tokens'] = num_max_completion_tokens
- response = openai.ChatCompletion.create(*args, **kwargs,
- model=self.model_type.value,
- **self.model_config_dict)
-
- log_and_print_online(
- "**[OpenAI_Usage_Info Receive]**\nprompt_tokens: {}\ncompletion_tokens: {}\ntotal_tokens: {}\n".format(
- response["usage"]["prompt_tokens"], response["usage"]["completion_tokens"],
- response["usage"]["total_tokens"]))
- if not isinstance(response, Dict):
- raise RuntimeError("Unexpected return from OpenAI API")
- return response
-
-
-class StubModel(ModelBackend):
- r"""A dummy model used for unit tests."""
-
- def __init__(self, *args, **kwargs) -> None:
- super().__init__()
-
- def run(self, *args, **kwargs) -> Dict[str, Any]:
- ARBITRARY_STRING = "Lorem Ipsum"
-
- return dict(
- id="stub_model_id",
- usage=dict(),
- choices=[
- dict(finish_reason="stop",
- message=dict(content=ARBITRARY_STRING, role="assistant"))
- ],
- )
-
-
-class ModelFactory:
- r"""Factory of backend models.
-
- Raises:
- ValueError: in case the provided model type is unknown.
- """
-
- @staticmethod
- def create(model_type: ModelType, model_config_dict: Dict) -> ModelBackend:
- default_model_type = ModelType.GPT_3_5_TURBO
-
- if model_type in {
- ModelType.GPT_3_5_TURBO, ModelType.GPT_4, ModelType.GPT_4_32k,
- None
- }:
- model_class = OpenAIModel
- elif model_type == ModelType.STUB:
- model_class = StubModel
- else:
- raise ValueError("Unknown model")
-
- if model_type is None:
- model_type = default_model_type
-
- # log_and_print_online("Model Type: {}".format(model_type))
- inst = model_class(model_type, model_config_dict)
- return inst
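`OpenAIModel.run` above budgets the completion by counting prompt tokens with tiktoken, adding a fixed send/receive gap, and subtracting the total from the model's context window. A hedged sketch of that arithmetic; the messages and model name are examples only:

```python
import tiktoken

messages = [
    {"role": "system", "content": "You are a helpful programmer."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
model_name = "gpt-3.5-turbo"
context_window = 4096            # from num_max_token_map above
gap_between_send_receive = 50    # same fudge factor the class uses

encoding = tiktoken.encoding_for_model(model_name)
prompt_text = "\n".join(m["content"] for m in messages)
num_prompt_tokens = len(encoding.encode(prompt_text)) + gap_between_send_receive
max_completion_tokens = context_window - num_prompt_tokens
print(num_prompt_tokens, "prompt tokens ->", max_completion_tokens, "tokens left for the reply")
```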
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md
deleted file mode 100644
index 011fc2c0731db7f533be0892b32579d78dfd39f1..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download: A Complete Guide
-
-
If you are looking for a powerful software to create and perform music live on stage, you might have heard of Ableton Live Suite 9.7.5. This is a digital audio workstation (DAW) that allows you to record, edit, mix and master your musical ideas with ease and flexibility. But what if you don't have the budget to buy the full version of this software? Is there a way to get it for free?
-
-
In this article, we will show you how to download and install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download, a cracked version of the software that bypasses the activation process and lets you use all the features without paying a dime. We will also explain the benefits and risks of using a cracked software, and how to avoid malware and viruses that might come with it.
-
Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download
Ableton Live Suite 9.7.5 is the latest version of Ableton Live, a software for creating musical ideas, turning them into finished songs, and even taking them onto the stage. It is designed for use in live performance as well as for production, and it offers a unique workflow that lets you freely and independently start and stop any number of audio or MIDI loops in real-time, without interrupting your creative flow.
-
-
Ableton Live Suite 9.7.5 comes with a lot of features and tools that make it a complete solution for music creation and performance. Some of these features are:
-
-
-
Multitrack recording up to 32-bit/192 kHz
-
Advanced warping and real-time time stretching
-
Unlimited instruments, audio effects and MIDI effects per project
-
VST and Audio Unit support
-
Group tracks and MIDI Clock/sync
-
Nondestructive editing with unlimited undo
-
Powerful MIDI sequencing of software and hardware instruments
-
ReWire, Time signature changes, and Track Freeze
-
MIDI output to hardware synths
-
MIDI remote control instant mapping
-
WAV, AIFF, MP3, Ogg Vorbis, FLAC file support
-
And much more...
-
-
-
Ableton Live Suite 9.7.5 also comes with a collection of instruments, sounds, samples and loops that you can use to create any kind of music genre you want. You can also customize your own sounds and effects with the built-in synthesizers, samplers, drum machines and audio effects.
-
-
How to Download and Install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download?
-
-
If you want to get Ableton Live Suite 9.7.5 for free, you will need to download a cracked version of the software from a reliable source. A cracked version is a modified version of the software that has been hacked or patched to bypass the activation process and make it work without a license key or serial number.
-
-
One of the sources that offer Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download is [^1^], a website that provides free downloads of various software and games for Windows and Mac OS X. Here are the steps to download and install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download from this website:
-
-
-
Go to [^1^] and search for "Ableton Live Suite 9.7.5" in the search box.
-
Select the first result that says "Ableton Live Suite 9.7.5 Free Download".
-
Scroll down to the bottom of the page and click on the green button that says "Download Now".
-
You will be redirected to another page where you will need to complete a captcha verification to prove that you are not a robot.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md
deleted file mode 100644
index 5fc1fb525bae184dc453d658f9ad915e7f801d39..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
How to Fix Adobe FrameMaker 11 Amtlib.dll Error
-
If you are using Adobe FrameMaker 11, you may encounter an error message that says "amtlib.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vender for support."[^3^]
This error may occur due to corrupted or missing dll files, incompatible versions of FrameMaker and Windows, or other system issues. In this article, we will show you some possible solutions to fix this error and restore your FrameMaker functionality.
-
Solution 1: Repair or Reinstall FrameMaker 11
-
One of the simplest ways to fix the amtlib.dll error is to repair or reinstall FrameMaker 11. This will ensure that you have the latest and compatible version of the software and its components. To do this, follow these steps:
-
-
Open Programs & Features in Control Panel.
-
Find Adobe FrameMaker 11 in the list of installed programs and select it.
-
Click on Change/Uninstall and choose Repair or Reinstall from the options.
-
Follow the on-screen instructions to complete the process.
-
Restart your computer and launch FrameMaker 11 again.
-
-
Solution 2: Download and Replace Amtlib.dll File
-
If repairing or reinstalling FrameMaker 11 does not work, you can try to download and replace the amtlib.dll file manually. However, this is not recommended as it may cause further problems if you download a wrong or malicious file. You should only do this if you are confident about the source and compatibility of the file. To do this, follow these steps:
-
-
Find a reliable website that offers dll file downloads, such as https://www.dll-files.com/amtlib.dll.html[^3^]. Make sure you download the file that matches your FrameMaker 11 version and Windows system.
-
Extract the downloaded file and copy it to a safe location.
-
Navigate to the FrameMaker 11 install location. The default install location is: C:\\Program Files\\Adobe\\Adobe FrameMaker 11[^5^].
-
Find and rename the existing amtlib.dll file to something else, such as amtlib.dll.old.
-
Paste the new amtlib.dll file into the same folder.
-
Restart your computer and launch FrameMaker 11 again.
-
-
Solution 3: Update FrameMaker 11 to a Newer Version
-
If none of the above solutions work, you may need to update FrameMaker 11 to a newer version that is compatible with your Windows system and has fixed the critical vulnerabilities that may cause the amtlib.dll error. To do this, follow these steps:
-
-
-
Visit https://www.adobe.com/products/framemaker.html[^4^] and choose a plan that suits your needs. You can also request a free trial or a callback from Adobe customer support.
-
Download and install the latest version of FrameMaker according to the instructions provided by Adobe.
-
Enter your serial number or sign up for a subscription to activate the product.
-
Launch FrameMaker and enjoy its features.
-
-
We hope this article has helped you fix the Adobe FrameMaker 11 amtlib.dll error. If you have any questions or feedback, please let us know in the comments below.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md
deleted file mode 100644
index 094dbfb9712d3cc6febe3d051b95cd31854aaaa2..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
How to Use Elcomsoft Wireless Security Auditor Full to Test Your Wi-Fi Network Security
-
-
Wi-Fi networks are ubiquitous and convenient, but they also pose a security risk. If your Wi-Fi network is not properly secured, hackers can easily access your data, devices, and online accounts. That's why you need to test your Wi-Fi network security regularly and fix any vulnerabilities that you find.
-
-
One of the tools that you can use to test your Wi-Fi network security is Elcomsoft Wireless Security Auditor Full. This is a powerful and comprehensive software that can perform various types of attacks on your Wi-Fi network, such as:
-
download elcomsoft wireless security auditor full crack
Brute-force attacks: trying different combinations of passwords until the correct one is found.
-
Dictionary attacks: trying passwords from a predefined list of common or likely words.
-
Mask attacks: trying passwords that match a certain pattern or format.
-
Hybrid attacks: combining different methods of password guessing.
-
WPS attacks: exploiting the Wi-Fi Protected Setup feature that allows devices to connect to the network without entering a password.
-
-
-
Elcomsoft Wireless Security Auditor Full can also analyze the security of your Wi-Fi network by checking the encryption type, the signal strength, the number of connected devices, and other parameters. It can also generate reports and recommendations on how to improve your Wi-Fi network security.
-
-
To use Elcomsoft Wireless Security Auditor Full, you need to have a compatible wireless adapter that supports monitor mode and packet injection. You also need to have a license key to activate the full version of the software. You can download the trial version of Elcomsoft Wireless Security Auditor Full from here and purchase the license key from here.
-
-
Once you have installed and activated Elcomsoft Wireless Security Auditor Full, you can follow these steps to test your Wi-Fi network security:
-
-
-
Launch Elcomsoft Wireless Security Auditor Full and click on the "New Project" button.
-
Select your wireless adapter from the list and click on the "Start Scan" button.
-
Wait for the scan to finish and select your target Wi-Fi network from the list.
-
Click on the "Attack" button and choose the type of attack that you want to perform.
-
Configure the attack settings according to your preferences and click on the "Start" button.
-
Wait for the attack to finish and check if the password of your target Wi-Fi network has been cracked.
-
If the password has been cracked, change it immediately and follow the recommendations on how to improve your Wi-Fi network security.
-
-
-
By using Elcomsoft Wireless Security Auditor Full, you can test your Wi-Fi network security and prevent hackers from accessing your data, devices, and online accounts. Remember to test your Wi-Fi network security regularly and keep your software updated to ensure optimal protection.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md
deleted file mode 100644
index 77785b50825089907da7b54130ab0b1c31652654..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
DWG to PDF Converter MX Serial Key: How to Convert Your CAD Files Easily and Securely
-
If you are looking for a simple and effective way to convert your CAD files from DWG, DXF, or DWF format to PDF format, you might want to try DWG to PDF Converter MX. This is a powerful software that allows you to batch convert your CAD files into high-quality PDF files with various options and features. In this article, we will show you how to download, install, and use DWG to PDF Converter MX with a valid serial key. We will also answer some of the most frequently asked questions about this software and provide some tips and tricks on how to troubleshoot common issues and errors.
-
What is DWG to PDF Converter MX?
-
DWG to PDF Converter MX is a stand-alone software that does not require AutoCAD or any other CAD software. It can convert DWG, DXF, and DWF files into PDF files quickly and easily. It supports all versions of DWG, DXF, and DWF formats from R2.5-2021. It also supports AutoCAD pen sets file (*.ctb) and OLE entity (such as inline Word, Excel document objects in the DWG files).
Save the setup file (dwg2pdfmx.exe) to your computer and run it.
-
Follow the instructions on the screen to complete the installation process.
-
Launch the software and enter your serial key when prompted.
-
-
If you do not have a serial key, you can use the trial version for 15 days with some limitations. You can also purchase a full version from the website or contact the support team for more information.
-
What is a serial key and why do you need it?
-
A serial key is a unique code that is used to activate and register a software product. It usually consists of a combination of letters and numbers that is provided by the software developer or vendor. A serial key is also known as a product key, license key, activation key, or registration key.
-
The difference between a trial version and a full version
-
A trial version is a free version of a software product that allows you to test its features and functions for a limited period of time. A trial version usually has some restrictions or limitations, such as watermark, file size, output quality, or number of conversions. A trial version is intended to give you an idea of how the software works and whether it meets your needs and expectations.
-
A full version is a paid version of a software product that gives you access to all its features and functions without any restrictions or limitations. A full version also provides you with technical support and updates from the software developer or vendor. A full version is intended to give you the best user experience and satisfaction with the software.
-
How to get a valid serial key for DWG to PDF Converter MX
-
To get a valid serial key for DWG to PDF Converter MX, you need to purchase a full version from the official website of DWG to PDF Converter MX. The price of the full version is $99.50 USD for one user license. You can also get discounts for multiple user licenses or volume licenses. You can pay by credit card, PayPal, bank transfer, or other methods.
-
After you complete your payment, you will receive an email with your serial key and download link. You can also find your serial key in your account on the website. You need to enter your serial key in the software to activate and register it. You can use your serial key on one computer only. If you want to use it on another computer, you need to uninstall it from the first computer and install it on the second computer.
-
-
How to use DWG to PDF Converter MX to convert your CAD files
-
Using DWG to PDF Converter MX to convert your CAD files is very easy and fast. You just need to follow these steps:
-
Step-by-step guide on how to convert DWG, DXF, and DWF files to PDF
-
-
Launch DWG to PDF Converter MX and click on the "Add Files" button to add the CAD files that you want to convert. You can also drag and drop the files into the software window.
-
Select the output folder where you want to save the converted PDF files.
-
Click on the "Options" button to customize your output settings, such as page size, layout, quality, encryption, watermark, bookmarks, etc.
-
Click on the "Convert Now" button to start the conversion process. You can see the progress and status of each file in the software window.
-
When the conversion is done, you can open the output folder and view the converted PDF files with any PDF viewer or editor.
-
-
Tips and tricks on how to customize your output settings and optimize your conversion results
-
Here are some tips and tricks that can help you customize your output settings and optimize your conversion results:
-
-
If you want to convert only a specific part of a CAD file, you can use the "Clip" function in the software. You can select an area or a window in the CAD file and convert it into a PDF file.
-
If you want to merge multiple CAD files into one PDF file, you can use the "Combine" function in the software. You can select several CAD files and combine them into one PDF file with bookmarks.
-
If you want to split a large CAD file into smaller PDF files, you can use the "Split" function in the software. You can split a CAD file by page number, file size, or layout name.
-
If you want to add a table to your PDF file, you can use the "Table" function in the software. You can create a table with rows and columns and insert data into it. You can also adjust the font, color, border, and alignment of the table.
-
If you want to add an image to your PDF file, you can use the "Image" function in the software. You can insert an image from your computer or from a URL. You can also resize, rotate, crop, and flip the image.
-
If you want to add a text to your PDF file, you can use the "Text" function in the software. You can type or paste any text into the PDF file. You can also change the font, size, color, style, and alignment of the text.
-
-
How to troubleshoot common issues and errors with DWG to PDF Converter MX
-
Although DWG to PDF Converter MX is a reliable and stable software, you may encounter some issues and errors while using it. Here are some of the common problems and their solutions:
-
How to fix invalid serial key errors
-
If you get an error message that says "Invalid serial key" or "Serial key expired", it means that your serial key is not valid or has expired. This may happen for several reasons, such as:
-
-
You have entered the wrong serial key or made a typo.
-
You have used the same serial key on more than one computer.
-
You have changed your computer hardware or operating system.
-
You have downloaded a pirated or cracked version of the software.
-
-
To fix this problem, you need to do the following:
-
-
Make sure that you have entered the correct serial key without any spaces or extra characters.
-
Make sure that you have purchased a full version from the official website of DWG to PDF Converter MX and not from any other sources.
-
Make sure that you have uninstalled the software from your previous computer before installing it on a new one.
-
Contact the support team of DWG to PDF Converter MX and provide them with your order information and serial key. They will help you activate and register your software.
-
-
How to solve compatibility and performance issues
-
If you experience compatibility or performance issues with DWG to PDF Converter MX, such as crashing, freezing, slow conversion, or poor output quality, possible causes include:
-
-
Your computer does not meet the minimum system requirements for DWG to PDF Converter MX.
-
Your CAD files are corrupted, damaged, or protected by passwords or encryption.
-
Your output settings are too high or too low for your PDF files.
-
Your antivirus or firewall software is blocking or interfering with DWG to PDF Converter MX.
-
-
To solve this problem, you need to do the following:
-
-
Check the system requirements for DWG to PDF Converter MX and make sure that your computer meets them. The system requirements are:
-
-
Operating System: Windows XP/Vista/7/8/10 (32-bit and 64-bit)
Processor: Pentium III 1500 MHz or higher
Memory: 512 MB RAM or more
Disk Space: 100 MB free hard disk space or more
Display: 1024 x 768 resolution or higher
Internet Connection: Required for activation and updates
-
-
Scan your CAD files with antivirus software, repair them with a CAD repair tool if they are corrupted or damaged, and remove any password protection or encryption before converting them.
-
Adjust your output settings to your needs. You can lower the DPI, reduce the page size, enable compression, or remove unnecessary elements from your PDF files.
-
Disable or whitelist your antivirus or firewall software for DWG to PDF Converter MX. You can also temporarily turn off your internet connection while using the software.
-
-
Conclusion
-
DWG to PDF Converter MX is a great software that can help you convert your CAD files from DWG, DXF, or DWF format to PDF format easily and securely. It has many features and benefits that can enhance your conversion results and user experience. It is also easy to download, install, and use with a valid serial key. If you have any questions or issues with the software, you can contact the support team of DWG to PDF Converter MX for assistance.
-
If you want to try DWG to PDF Converter MX for yourself, you can download the trial version from the official website of DWG to PDF Converter MX and use it for 15 days. If you are satisfied with the software, you can purchase the full version and get a serial key to activate and register it. You can also get discounts for multiple user licenses or volume licenses.
-
DWG to PDF Converter MX is the best solution for converting your CAD files to PDF files. Don't miss this opportunity and get your copy today!
-
FAQs
-
What are the system requirements for DWG to PDF Converter MX?
-
The system requirements for DWG to PDF Converter MX are:
-
-
Operating System: Windows XP/Vista/7/8/10 (32-bit and 64-bit)
Processor: Pentium III 1500 MHz or higher
Memory: 512 MB RAM or more
Disk Space: 100 MB free hard disk space or more
Display: 1024 x 768 resolution or higher
Internet Connection: Required for activation and updates
-
-
How much does DWG to PDF Converter MX cost?
-
The price of DWG to PDF Converter MX is $99.50 USD for one user license. You can also get discounts for multiple user licenses or volume licenses. You can pay by credit card, PayPal, bank transfer, or other methods.
-
Is DWG to PDF Converter MX safe and reliable?
-
Yes, DWG to PDF Converter MX is safe and reliable. It does not contain any viruses, malware, spyware, or adware. It does not modify or damage your original CAD files. It does not collect or share any of your personal or confidential information. It is certified by several reputable software review sites and trusted by thousands of users worldwide.
-
Can I convert multiple CAD files at once with DWG to PDF Converter MX?
-
Yes, you can convert multiple CAD files at once with DWG to PDF Converter MX. You can add as many CAD files as you want to the software and batch convert them into PDF files with one click. You can also set different output settings for each CAD file or apply the same settings to all of them.
-
How can I contact the support team of DWG to PDF Converter MX?
-
If you have any questions, issues, feedback, or suggestions about DWG to PDF Converter MX, you can contact the support team by email at support@dwgtool.com. They will reply to you within 24 hours and provide you with professional and friendly assistance.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md
deleted file mode 100644
index b34676d2a3c1eab964f283a12d7a34beaa79afb9..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-```html
-
Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen: Learn Any Language Easily and Quickly
-
Do you want to learn a new language, but don't have the time or money to enroll in a course? Do you wish you could speak fluently with native speakers, without feeling embarrassed or frustrated? If you answered yes, then you need Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen.
-
Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen
Rosetta Stone is the world's leading software for learning languages. It uses a natural and intuitive method that helps you learn through immersion, just like you learned your first language. You will listen, speak, read and write in your new language, without memorizing rules or translations. You will also get feedback and guidance from Rosetta Stone's advanced speech recognition technology, which helps you improve your pronunciation and accent.
-
With Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen, you can access all the features and benefits of Rosetta Stone, without paying a dime. You can choose from over 30 languages, including English, Spanish, French, German, Italian, Chinese, Japanese and more. You can also customize your learning plan, track your progress and sync your lessons across your devices.
-
But how can you get Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen? It's simple. Just follow these steps:
-
-
Download Rosetta Stone 4.1.15 from the official website or from any trusted source.
Unzip the crack file and copy the contents to the installation folder of Rosetta Stone.
-
Run the crack file and generate a serial key.
-
Enter the serial key when prompted by Rosetta Stone.
-
Enjoy learning any language with Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen!
-
-
Don't miss this opportunity to learn any language easily and quickly with Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen. Download it now and start your journey to fluency!
-```
-
-```html
-
Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is the best way to learn any language at your own pace and convenience. You don't need to worry about deadlines, schedules, or exams. You can learn whenever and wherever you want, whether it's at home, in the office, or on the go. You can also switch between languages and levels as you wish, without losing your progress.
-
Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is also the most fun and engaging way to learn any language. You will not get bored or frustrated with boring drills or exercises. Instead, you will enjoy interactive and immersive activities that will keep you motivated and interested. You will also learn from real-life scenarios and situations that will prepare you for real-world conversations and interactions.
-
-
Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is the ultimate solution for anyone who wants to learn any language easily and quickly. It is trusted by millions of learners and educators around the world, who have achieved amazing results with Rosetta Stone. Whether you want to learn a new language for personal, professional, or academic reasons, Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen will help you achieve your goals.
-```
-
-
\ No newline at end of file
diff --git a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py b/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py
deleted file mode 100644
index 1de174b5b160c5288fbb9f738e58c694791f29c3..0000000000000000000000000000000000000000
--- a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import re
-import numpy as np
-from weakly_supervised_parser.tree.helpers import Tree
-
-
-def CKY(sent_all, prob_s, label_s, verbose=False):
- r"""
- choose tree with maximum expected number of constituents,
- or max \sum_{(i,j) \in tree} p((i,j) is constituent)
- """
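-    # Sketch of the dynamic program implemented below: with p(i, j) the probability
-    # that span (i, j) is a constituent, spans are scored as
-    #     score[i][j] = max_{i < k < j} (score[i][k] + score[k][j]) + p(i, j)
-    # with score[i][i+1] = 1 for single words; backpointers record the argmax split k,
-    # from which backpt_to_tree() rebuilds the highest-scoring binary tree.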
-
- def backpt_to_tree(sent, backpt, label_table):
- def to_tree(i, j):
- if j - i == 1:
- return Tree(sent[i], None, sent[i])
- else:
- k = backpt[i][j]
- return Tree(label_table[i][j], [to_tree(i, k), to_tree(k, j)], None)
-
- return to_tree(0, len(sent))
-
- def to_table(value_s, i_s, j_s):
- table = [[None for _ in range(np.max(j_s) + 1)] for _ in range(np.max(i_s) + 1)]
- for value, i, j in zip(value_s, i_s, j_s):
- table[i][j] = value
- return table
-
- # produce list of spans to pass to is_constituent, while keeping track of which sentence
- sent_s, i_s, j_s = [], [], []
- idx_all = []
- for sent in sent_all:
- start = len(sent_s)
- for i in range(len(sent)):
- for j in range(i + 1, len(sent) + 1):
- sent_s.append(sent)
- i_s.append(i)
- j_s.append(j)
- idx_all.append((start, len(sent_s)))
-
- # feed spans to is_constituent
- # prob_s, label_s = self.is_constituent(sent_s, i_s, j_s, verbose = verbose)
-
- # given span probs, perform CKY to get best tree for each sentence.
- tree_all, prob_all = [], []
- for sent, idx in zip(sent_all, idx_all):
- # first, use tables to keep track of things
- k, l = idx
- prob, label = prob_s[k:l], label_s[k:l]
- i, j = i_s[k:l], j_s[k:l]
-
- prob_table = to_table(prob, i, j)
- label_table = to_table(label, i, j)
-
- # perform cky using scores and backpointers
- score_table = [[None for _ in range(len(sent) + 1)] for _ in range(len(sent))]
- backpt_table = [[None for _ in range(len(sent) + 1)] for _ in range(len(sent))]
- for i in range(len(sent)): # base case: single words
- score_table[i][i + 1] = 1
- for j in range(2, len(sent) + 1):
- for i in range(j - 2, -1, -1):
- best, argmax = -np.inf, None
- for k in range(i + 1, j): # find splitpoint
- score = score_table[i][k] + score_table[k][j]
- if score > best:
- best, argmax = score, k
- score_table[i][j] = best + prob_table[i][j]
- backpt_table[i][j] = argmax
-
- tree = backpt_to_tree(sent, backpt_table, label_table)
- tree_all.append(tree)
- prob_all.append(prob_table)
-
- return tree_all, prob_all
-
-
-def get_best_parse(sentence, spans):
- flattened_scores = []
- for i in range(spans.shape[0]):
- for j in range(spans.shape[1]):
- if i > j:
- continue
- else:
- flattened_scores.append(spans[i, j])
- prob_s, label_s = flattened_scores, ["S"] * len(flattened_scores)
- # print(prob_s, label_s)
- trees, _ = CKY(sent_all=sentence, prob_s=prob_s, label_s=label_s)
- s = str(trees[0])
- # Replace previous occurrence of string
- out = re.sub(r"(? 0.5)
- gt_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32)
- gt_instance1 = Instances(image_shape)
- gt_instance1.gt_boxes = Boxes(gt_boxes1)
- gt_instance1.gt_classes = torch.tensor([1, 2])
- gt_instance1.gt_masks = BitMasks(torch.rand((2,) + image_shape) > 0.5)
- gt_instances = [gt_instance0, gt_instance1]
-
- proposal_generator = build_proposal_generator(cfg, feature_shape)
- roi_heads = StandardROIHeads(cfg, feature_shape)
-
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(images, features, gt_instances)
- _, detector_losses = roi_heads(images, features, proposals, gt_instances)
-
- detector_losses.update(proposal_losses)
- expected_losses = {
- "loss_cls": 4.5253729820251465,
- "loss_box_reg": 0.009785720147192478,
- "loss_mask": 0.693184494972229,
- "loss_rpn_cls": 0.08186662942171097,
- "loss_rpn_loc": 0.1104838103055954,
- }
- succ = all(
- torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0)))
- for name in detector_losses.keys()
- )
- self.assertTrue(
- succ,
-            "Losses have changed! New losses: {}".format(
- {k: v.item() for k, v in detector_losses.items()}
- ),
- )
-
- def test_rroi_heads(self):
- torch.manual_seed(121)
- cfg = get_cfg()
- cfg.MODEL.PROPOSAL_GENERATOR.NAME = "RRPN"
- cfg.MODEL.ANCHOR_GENERATOR.NAME = "RotatedAnchorGenerator"
- cfg.MODEL.ROI_HEADS.NAME = "RROIHeads"
- cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead"
- cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2
- cfg.MODEL.RPN.BBOX_REG_WEIGHTS = (1, 1, 1, 1, 1)
- cfg.MODEL.RPN.HEAD_NAME = "StandardRPNHead"
- cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignRotated"
- cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5, 1)
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)}
-
- image_shape = (15, 15)
- gt_boxes0 = torch.tensor([[2, 2, 2, 2, 30], [4, 4, 4, 4, 0]], dtype=torch.float32)
- gt_instance0 = Instances(image_shape)
- gt_instance0.gt_boxes = RotatedBoxes(gt_boxes0)
- gt_instance0.gt_classes = torch.tensor([2, 1])
- gt_boxes1 = torch.tensor([[1.5, 5.5, 1, 3, 0], [8.5, 4, 3, 2, -50]], dtype=torch.float32)
- gt_instance1 = Instances(image_shape)
- gt_instance1.gt_boxes = RotatedBoxes(gt_boxes1)
- gt_instance1.gt_classes = torch.tensor([1, 2])
- gt_instances = [gt_instance0, gt_instance1]
-
- proposal_generator = build_proposal_generator(cfg, feature_shape)
- roi_heads = build_roi_heads(cfg, feature_shape)
-
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(images, features, gt_instances)
- _, detector_losses = roi_heads(images, features, proposals, gt_instances)
-
- detector_losses.update(proposal_losses)
- expected_losses = {
- "loss_cls": 4.365657806396484,
- "loss_box_reg": 0.0015851043863222003,
- "loss_rpn_cls": 0.2427729219198227,
- "loss_rpn_loc": 0.3646621108055115,
- }
- succ = all(
- torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0)))
- for name in detector_losses.keys()
- )
- self.assertTrue(
- succ,
-            "Losses have changed! New losses: {}".format(
- {k: v.item() for k, v in detector_losses.items()}
- ),
- )
-
- def test_box_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024, height=14, width=14)
- box_features = torch.randn(4, 1024, 14, 14)
-
- box_head = FastRCNNConvFCHead(
- input_shape, conv_dims=[512, 512], fc_dims=[1024, 1024]
- ).eval()
- script_box_head = torch.jit.script(box_head)
-
- origin_output = box_head(box_features)
- script_output = script_box_head(box_features)
- self.assertTrue(torch.equal(origin_output, script_output))
-
- def test_mask_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024)
- mask_features = torch.randn(4, 1024, 14, 14)
-
- image_shapes = [(10, 10), (15, 15)]
- pred_instance0 = Instances(image_shapes[0])
- pred_classes0 = torch.tensor([1, 2, 3], dtype=torch.int64)
- pred_instance0.pred_classes = pred_classes0
- pred_instance1 = Instances(image_shapes[1])
- pred_classes1 = torch.tensor([4], dtype=torch.int64)
- pred_instance1.pred_classes = pred_classes1
-
- mask_head = MaskRCNNConvUpsampleHead(
- input_shape, num_classes=80, conv_dims=[256, 256]
- ).eval()
- # pred_instance will be in-place changed during the inference
- # process of `MaskRCNNConvUpsampleHead`
- origin_outputs = mask_head(mask_features, deepcopy([pred_instance0, pred_instance1]))
-
- fields = {"pred_masks": torch.Tensor, "pred_classes": torch.Tensor}
- with freeze_training_mode(mask_head), patch_instances(fields) as NewInstances:
-            script_mask_head = torch.jit.script(mask_head)
- pred_instance0 = NewInstances.from_instances(pred_instance0)
- pred_instance1 = NewInstances.from_instances(pred_instance1)
-            script_outputs = script_mask_head(mask_features, [pred_instance0, pred_instance1])
-
- for origin_ins, script_ins in zip(origin_outputs, script_outputs):
- assert_instances_allclose(origin_ins, script_ins, rtol=0)
-
- def test_keypoint_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024, height=14, width=14)
- keypoint_features = torch.randn(4, 1024, 14, 14)
-
- image_shapes = [(10, 10), (15, 15)]
- pred_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6], [1, 5, 2, 8]], dtype=torch.float32)
- pred_instance0 = Instances(image_shapes[0])
- pred_instance0.pred_boxes = Boxes(pred_boxes0)
- pred_boxes1 = torch.tensor([[7, 3, 10, 5]], dtype=torch.float32)
- pred_instance1 = Instances(image_shapes[1])
- pred_instance1.pred_boxes = Boxes(pred_boxes1)
-
- keypoint_head = KRCNNConvDeconvUpsampleHead(
- input_shape, num_keypoints=17, conv_dims=[512, 512]
- ).eval()
- origin_outputs = keypoint_head(
- keypoint_features, deepcopy([pred_instance0, pred_instance1])
- )
-
- fields = {
- "pred_boxes": Boxes,
- "pred_keypoints": torch.Tensor,
- "pred_keypoint_heatmaps": torch.Tensor,
- }
- with freeze_training_mode(keypoint_head), patch_instances(fields) as NewInstances:
- script_keypoint_head = torch.jit.script(keypoint_head)
- pred_instance0 = NewInstances.from_instances(pred_instance0)
- pred_instance1 = NewInstances.from_instances(pred_instance1)
- script_outputs = script_keypoint_head(
- keypoint_features, [pred_instance0, pred_instance1]
- )
-
- for origin_ins, script_ins in zip(origin_outputs, script_outputs):
- assert_instances_allclose(origin_ins, script_ins, rtol=0)
-
- def test_StandardROIHeads_scriptability(self):
- cfg = get_cfg()
- cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead"
- cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2
- cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2"
- cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5)
- cfg.MODEL.MASK_ON = True
- cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.01
- cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.01
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)}
-
- roi_heads = StandardROIHeads(cfg, feature_shape).eval()
-
- proposal0 = Instances(image_sizes[0])
- proposal_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32)
- proposal0.proposal_boxes = Boxes(proposal_boxes0)
- proposal0.objectness_logits = torch.tensor([0.5, 0.7], dtype=torch.float32)
-
- proposal1 = Instances(image_sizes[1])
- proposal_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32)
- proposal1.proposal_boxes = Boxes(proposal_boxes1)
- proposal1.objectness_logits = torch.tensor([0.1, 0.9], dtype=torch.float32)
- proposals = [proposal0, proposal1]
-
- pred_instances, _ = roi_heads(images, features, proposals)
- fields = {
- "objectness_logits": torch.Tensor,
- "proposal_boxes": Boxes,
- "pred_classes": torch.Tensor,
- "scores": torch.Tensor,
- "pred_masks": torch.Tensor,
- "pred_boxes": Boxes,
- "pred_keypoints": torch.Tensor,
- "pred_keypoint_heatmaps": torch.Tensor,
- }
- with freeze_training_mode(roi_heads), patch_instances(fields) as new_instances:
- proposal0 = new_instances.from_instances(proposal0)
- proposal1 = new_instances.from_instances(proposal1)
- proposals = [proposal0, proposal1]
-            scripted_roi_heads = torch.jit.script(roi_heads)
-            scripted_pred_instances, _ = scripted_roi_heads(images, features, proposals)
-
- for instance, scripted_instance in zip(pred_instances, scripted_pred_instances):
- assert_instances_allclose(instance, scripted_instance, rtol=0)
-
- def test_PointRend_mask_head_tracing(self):
- cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")
- point_rend.add_pointrend_config(cfg)
- cfg.MODEL.ROI_HEADS.IN_FEATURES = ["p2", "p3"]
- cfg.MODEL.ROI_MASK_HEAD.NAME = "PointRendMaskHead"
- cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE = ""
- cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON = True
- chan = 256
- head = point_rend.PointRendMaskHead(
- cfg,
- {
- "p2": ShapeSpec(channels=chan, stride=4),
- "p3": ShapeSpec(channels=chan, stride=8),
- },
- )
-
- def gen_inputs(h, w, N):
- p2 = torch.rand(1, chan, h, w)
- p3 = torch.rand(1, chan, h // 2, w // 2)
- boxes = random_boxes(N, max_coord=h)
- return p2, p3, boxes
-
- class Wrap(nn.ModuleDict):
- def forward(self, p2, p3, boxes):
- features = {
- "p2": p2,
- "p3": p3,
- }
- inst = Instances((p2.shape[2] * 4, p2.shape[3] * 4))
- inst.pred_boxes = Boxes(boxes)
- inst.pred_classes = torch.zeros(inst.__len__(), dtype=torch.long)
- out = self.head(features, [inst])[0]
- return out.pred_masks
-
- model = Wrap({"head": head})
- model.eval()
- with torch.no_grad(), patch_builtin_len():
- traced = torch.jit.trace(model, gen_inputs(302, 208, 20))
- inputs = gen_inputs(100, 120, 30)
- out_eager = model(*inputs)
- out_trace = traced(*inputs)
- self.assertTrue(torch.allclose(out_eager, out_trace))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/ntt123/mnist-rnn/index.html b/spaces/ntt123/mnist-rnn/index.html
deleted file mode 100644
index 048031260e442abc269ad1148e884a57e81b3b84..0000000000000000000000000000000000000000
--- a/spaces/ntt123/mnist-rnn/index.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-
-
-
- mnist-rnn
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py b/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py
deleted file mode 100644
index 95a019c6aeec8c3f1d582907d5fe7ff3ed6b9369..0000000000000000000000000000000000000000
--- a/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py
+++ /dev/null
@@ -1,843 +0,0 @@
-import argparse
-import logging
-import sys
-from copy import deepcopy
-
-sys.path.append('./') # to run '$ python *.py' files in subdirectories
-logger = logging.getLogger(__name__)
-import torch
-from models.common import *
-from models.experimental import *
-from utils.autoanchor import check_anchor_order
-from utils.general import make_divisible, check_file, set_logging
-from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \
- select_device, copy_attr
-from utils.loss import SigmoidBin
-
-try:
- import thop # for FLOPS computation
-except ImportError:
- thop = None
-
-
-class Detect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(Detect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
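-    # Inference decoding sketch for forward() below: each head output is reshaped to
-    # (bs, na, ny, nx, no) and passed through a sigmoid, then boxes are recovered as
-    #     xy = (sigmoid(t_xy) * 2 - 0.5 + grid) * stride[i]
-    #     wh = (sigmoid(t_wh) * 2) ** 2 * anchor_grid[i]
-    # so centres stay within about one cell of their grid point and sides are capped
-    # at four times the anchor size.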
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
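-    # convert() below prepares outputs for an exported NMS step (used when include_nms
-    # is set): class scores are multiplied by objectness, and the matrix product maps
-    # centre-size boxes (cx, cy, w, h) to corner boxes
-    # (cx - w/2, cy - h/2, cx + w/2, cy + h/2).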
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IDetect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(IDetect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- def fuseforward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
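-    # Fusion sketch for fuse() below: ImplicitA adds a learned vector a before the 1x1
-    # conv, and conv(x + a) = conv(x) + W @ a, so W @ a is folded into the conv bias;
-    # ImplicitM multiplies the conv output by a learned vector m, which is folded in by
-    # scaling both the conv weight and bias by m.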
- def fuse(self):
- print("IDetect.fuse")
- # fuse ImplicitA and Convolution
- for i in range(len(self.m)):
- c1,c2,_,_ = self.m[i].weight.shape
- c1_,c2_, _,_ = self.ia[i].implicit.shape
- self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1)
-
- # fuse ImplicitM and Convolution
- for i in range(len(self.m)):
- c1,c2, _,_ = self.im[i].implicit.shape
- self.m[i].bias *= self.im[i].implicit.reshape(c2)
- self.m[i].weight *= self.im[i].implicit.transpose(0,1)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IKeypoint(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, nc=80, anchors=(), nkpt=17, ch=(), inplace=True, dw_conv_kpt=False): # detection layer
- super(IKeypoint, self).__init__()
- self.nc = nc # number of classes
- self.nkpt = nkpt
- self.dw_conv_kpt = dw_conv_kpt
- self.no_det=(nc + 5) # number of outputs per anchor for box and class
- self.no_kpt = 3*self.nkpt ## number of outputs per anchor for keypoints
- self.no = self.no_det+self.no_kpt
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- self.flip_test = False
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no_det * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no_det * self.na) for _ in ch)
-
- if self.nkpt is not None:
- if self.dw_conv_kpt: #keypoint head is slightly more complex
- self.m_kpt = nn.ModuleList(
- nn.Sequential(DWConv(x, x, k=3), Conv(x,x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), Conv(x,x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), nn.Conv2d(x, self.no_kpt * self.na, 1)) for x in ch)
- else: #keypoint head is a single convolution
- self.m_kpt = nn.ModuleList(nn.Conv2d(x, self.no_kpt * self.na, 1) for x in ch)
-
- self.inplace = inplace # use in-place ops (e.g. slice assignment)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- if self.nkpt is None or self.nkpt==0:
- x[i] = self.im[i](self.m[i](self.ia[i](x[i]))) # conv
- else :
- x[i] = torch.cat((self.im[i](self.m[i](self.ia[i](x[i]))), self.m_kpt[i](x[i])), axis=1)
-
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
- x_det = x[i][..., :6]
- x_kpt = x[i][..., 6:]
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
- kpt_grid_x = self.grid[i][..., 0:1]
- kpt_grid_y = self.grid[i][..., 1:2]
-
- if self.nkpt == 0:
- y = x[i].sigmoid()
- else:
- y = x_det.sigmoid()
-
- if self.inplace:
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh
- if self.nkpt != 0:
- x_kpt[..., 0::3] = (x_kpt[..., ::3] * 2. - 0.5 + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy
- x_kpt[..., 1::3] = (x_kpt[..., 1::3] * 2. - 0.5 + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy
- #x_kpt[..., 0::3] = (x_kpt[..., ::3] + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy
- #x_kpt[..., 1::3] = (x_kpt[..., 1::3] + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy
- #print('=============')
- #print(self.anchor_grid[i].shape)
- #print(self.anchor_grid[i][...,0].unsqueeze(4).shape)
- #print(x_kpt[..., 0::3].shape)
- #x_kpt[..., 0::3] = ((x_kpt[..., 0::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 1::3] = ((x_kpt[..., 1::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 0::3] = (((x_kpt[..., 0::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 1::3] = (((x_kpt[..., 1::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy
- x_kpt[..., 2::3] = x_kpt[..., 2::3].sigmoid()
-
- y = torch.cat((xy, wh, y[..., 4:], x_kpt), dim = -1)
-
- else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- if self.nkpt != 0:
- y[..., 6:] = (y[..., 6:] * 2. - 0.5 + self.grid[i].repeat((1,1,1,1,self.nkpt))) * self.stride[i] # xy
- y = torch.cat((xy, wh, y[..., 4:]), -1)
-
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
-
-class IAuxDetect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(IAuxDetect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[:self.nl]) # output conv
- self.m2 = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[self.nl:]) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch[:self.nl])
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch[:self.nl])
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- x[i+self.nl] = self.m2[i](x[i+self.nl])
- x[i+self.nl] = x[i+self.nl].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x[:self.nl])
-
- def fuseforward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].data # wh
- y = torch.cat((xy, wh, y[..., 4:]), -1)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
- def fuse(self):
- print("IAuxDetect.fuse")
- # fuse ImplicitA and Convolution
- for i in range(len(self.m)):
- c1,c2,_,_ = self.m[i].weight.shape
- c1_,c2_, _,_ = self.ia[i].implicit.shape
- self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1)
-
- # fuse ImplicitM and Convolution
- for i in range(len(self.m)):
- c1,c2, _,_ = self.im[i].implicit.shape
- self.m[i].bias *= self.im[i].implicit.reshape(c2)
- self.m[i].weight *= self.im[i].implicit.transpose(0,1)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IBin(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, nc=80, anchors=(), ch=(), bin_count=21): # detection layer
- super(IBin, self).__init__()
- self.nc = nc # number of classes
- self.bin_count = bin_count
-
- self.w_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0)
- self.h_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0)
- # classes, x,y,obj
- self.no = nc + 3 + \
- self.w_bin_sigmoid.get_length() + self.h_bin_sigmoid.get_length() # w-bce, h-bce
- # + self.x_bin_sigmoid.get_length() + self.y_bin_sigmoid.get_length()
-
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch)
-
- def forward(self, x):
-
- #self.x_bin_sigmoid.use_fw_regression = True
- #self.y_bin_sigmoid.use_fw_regression = True
- self.w_bin_sigmoid.use_fw_regression = True
- self.h_bin_sigmoid.use_fw_regression = True
-
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- #y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
-
-
- #px = (self.x_bin_sigmoid.forward(y[..., 0:12]) + self.grid[i][..., 0]) * self.stride[i]
- #py = (self.y_bin_sigmoid.forward(y[..., 12:24]) + self.grid[i][..., 1]) * self.stride[i]
-
- pw = self.w_bin_sigmoid.forward(y[..., 2:24]) * self.anchor_grid[i][..., 0]
- ph = self.h_bin_sigmoid.forward(y[..., 24:46]) * self.anchor_grid[i][..., 1]
-
- #y[..., 0] = px
- #y[..., 1] = py
- y[..., 2] = pw
- y[..., 3] = ph
-
- y = torch.cat((y[..., 0:4], y[..., 46:]), dim=-1)
-
- z.append(y.view(bs, -1, y.shape[-1]))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
-
-class Model(nn.Module):
- def __init__(self, cfg='yolor-csp-c.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super(Model, self).__init__()
- self.traced = False
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg) as f:
- self.yaml = yaml.load(f, Loader=yaml.SafeLoader) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
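-        # Stride/anchor setup sketch for the branches below: a dummy 256x256 forward
-        # pass measures each detection head's output resolution, giving
-        # stride = 256 / feature_height, and anchors are rescaled from pixels to grid
-        # units by dividing by that stride.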
- if isinstance(m, Detect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IDetect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IAuxDetect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))[:4]]) # forward
- #print(m.stride)
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_aux_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IBin):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases_bin() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IKeypoint):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases_kpt() # only run once
- # print('Strides: %s' % m.stride.tolist())
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- logger.info('')
-
- def forward(self, x, augment=False, profile=False):
- if augment:
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self.forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi[..., :4] /= si # de-scale
- if fi == 2:
- yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud
- elif fi == 3:
- yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr
- y.append(yi)
- return torch.cat(y, 1), None # augmented inference, train
- else:
- return self.forward_once(x, profile) # single-scale inference, train
-
- def forward_once(self, x, profile=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
-
- if not hasattr(self, 'traced'):
- self.traced=False
-
- if self.traced:
- if isinstance(m, Detect) or isinstance(m, IDetect) or isinstance(m, IAuxDetect) or isinstance(m, IKeypoint):
- break
-
- if profile:
- c = isinstance(m, (Detect, IDetect, IAuxDetect, IBin))
- o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS
- for _ in range(10):
- m(x.copy() if c else x)
- t = time_synchronized()
- for _ in range(10):
- m(x.copy() if c else x)
- dt.append((time_synchronized() - t) * 100)
- print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type))
-
- x = m(x) # run
-
- y.append(x if m.i in self.save else None) # save output
-
- if profile:
- print('%.1fms total' % sum(dt))
- return x
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _initialize_aux_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, mi2, s in zip(m.m, m.m2, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- b2 = mi2.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b2.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b2.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi2.bias = torch.nn.Parameter(b2.view(-1), requires_grad=True)
-
- def _initialize_biases_bin(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Bin() module
- bc = m.bin_count
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- old = b[:, (0,1,2,bc+3)].data
- obj_idx = 2*bc+4
- b[:, :obj_idx].data += math.log(0.6 / (bc + 1 - 0.99))
- b[:, obj_idx].data += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b[:, (obj_idx+1):].data += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- b[:, (0,1,2,bc+3)].data = old
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _initialize_biases_kpt(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _print_biases(self):
- m = self.model[-1] # Detect() module
- for mi in m.m: # from
- b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
- print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
-
- # def _print_weights(self):
- # for m in self.model.modules():
- # if type(m) is Bottleneck:
- # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- print('Fusing layers... ')
- for m in self.model.modules():
- if isinstance(m, RepConv):
- #print(f" fuse_repvgg_block")
- m.fuse_repvgg_block()
- elif isinstance(m, RepConv_OREPA):
- #print(f" switch_to_deploy")
- m.switch_to_deploy()
- elif type(m) is Conv and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.fuseforward # update forward
- elif isinstance(m, (IDetect, IAuxDetect)):
- m.fuse()
- m.forward = m.fuseforward
- self.info()
- return self
-
- def nms(self, mode=True): # add or remove NMS module
- present = type(self.model[-1]) is NMS # last layer is NMS
- if mode and not present:
- print('Adding NMS... ')
- m = NMS() # module
- m.f = -1 # from
- m.i = self.model[-1].i + 1 # index
- self.model.add_module(name='%s' % m.i, module=m) # add
- self.eval()
- elif not mode and present:
- print('Removing NMS... ')
- self.model = self.model[:-1] # remove
- return self
-
- def autoshape(self): # add autoShape module
- print('Adding autoShape... ')
- m = autoShape(self) # wrap model
- copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
- return m
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
-
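-# parse_model() below assembles the network from the yaml dict: for each entry in
-# backbone + head it resolves the module class, scales repeat counts by depth_multiple
-# and channel widths by width_multiple (rounded to a multiple of 8 via make_divisible),
-# then records which intermediate outputs must be kept for later 'from' references.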
-def parse_model(d, ch): # model_dict, input_channels(3)
- logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- try:
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
- except:
- pass
-
- n = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in [nn.Conv2d, Conv, RobustConv, RobustConv2, DWConv, GhostConv, RepConv, RepConv_OREPA, DownC,
- SPP, SPPF, SPPCSPC, GhostSPPCSPC, MixConv2d, Focus, Stem, GhostStem, CrossConv,
- Bottleneck, BottleneckCSPA, BottleneckCSPB, BottleneckCSPC,
- RepBottleneck, RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC,
- Res, ResCSPA, ResCSPB, ResCSPC,
- RepRes, RepResCSPA, RepResCSPB, RepResCSPC,
- ResX, ResXCSPA, ResXCSPB, ResXCSPC,
- RepResX, RepResXCSPA, RepResXCSPB, RepResXCSPC,
- Ghost, GhostCSPA, GhostCSPB, GhostCSPC,
- SwinTransformerBlock, STCSPA, STCSPB, STCSPC,
- SwinTransformer2Block, ST2CSPA, ST2CSPB, ST2CSPC]:
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in [DownC, SPPCSPC, GhostSPPCSPC,
- BottleneckCSPA, BottleneckCSPB, BottleneckCSPC,
- RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC,
- ResCSPA, ResCSPB, ResCSPC,
- RepResCSPA, RepResCSPB, RepResCSPC,
- ResXCSPA, ResXCSPB, ResXCSPC,
- RepResXCSPA, RepResXCSPB, RepResXCSPC,
- GhostCSPA, GhostCSPB, GhostCSPC,
- STCSPA, STCSPB, STCSPC,
- ST2CSPA, ST2CSPB, ST2CSPC]:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum([ch[x] for x in f])
- elif m is Chuncat:
- c2 = sum([ch[x] for x in f])
- elif m is Shortcut:
- c2 = ch[f[0]]
- elif m is Foldcut:
- c2 = ch[f] // 2
- elif m in [Detect, IDetect, IAuxDetect, IBin, IKeypoint]:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- elif m is ReOrg:
- c2 = ch[f] * 4
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum([x.numel() for x in m_.parameters()]) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolor-csp-c.yaml', help='model.yaml')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--profile', action='store_true', help='profile model speed')
- opt = parser.parse_args()
- opt.cfg = check_file(opt.cfg) # check file
- set_logging()
- device = select_device(opt.device)
-
- # Create model
- model = Model(opt.cfg).to(device)
- model.train()
-
- if opt.profile:
- img = torch.rand(1, 3, 640, 640).to(device)
- y = model(img, profile=True)
-
- # Profile
- # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
- # y = model(img, profile=True)
-
- # Tensorboard
- # from torch.utils.tensorboard import SummaryWriter
- # tb_writer = SummaryWriter()
- # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/")
- # tb_writer.add_graph(model.model, img) # add model to tensorboard
- # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py
deleted file mode 100644
index c0ad3b4a3486d4d54aa68b1bf6b74f8c387f7f6a..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py
+++ /dev/null
@@ -1,52 +0,0 @@
-from typing import TYPE_CHECKING
-
-from ...utils import (
- OptionalDependencyNotAvailable,
- _LazyModule,
- get_objects_from_module,
- is_torch_available,
- is_transformers_available,
-)
-
-
-_dummy_objects = {}
-_import_structure = {}
-
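-# Lazy import sketch: when torch and transformers are available, the real pipeline
-# classes are registered in _import_structure and resolved on demand through the
-# _LazyModule installed at the bottom of the file; otherwise dummy placeholder objects
-# are collected in _dummy_objects and attached to the module so imports still succeed.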
-try:
- if not (is_transformers_available() and is_torch_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from ...utils import dummy_torch_and_transformers_objects
-
- _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
-else:
- _import_structure["modeling_roberta_series"] = ["RobertaSeriesModelWithTransformation"]
- _import_structure["pipeline_alt_diffusion"] = ["AltDiffusionPipeline"]
- _import_structure["pipeline_alt_diffusion_img2img"] = ["AltDiffusionImg2ImgPipeline"]
-
- _import_structure["pipeline_output"] = ["AltDiffusionPipelineOutput"]
-
-if TYPE_CHECKING:
- try:
- if not (is_transformers_available() and is_torch_available()):
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
-
- else:
- from .modeling_roberta_series import RobertaSeriesModelWithTransformation
- from .pipeline_alt_diffusion import AltDiffusionPipeline
- from .pipeline_alt_diffusion_img2img import AltDiffusionImg2ImgPipeline
- from .pipeline_output import AltDiffusionPipelineOutput
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(
- __name__,
- globals()["__file__"],
- _import_structure,
- module_spec=__spec__,
- )
- for name, value in _dummy_objects.items():
- setattr(sys.modules[__name__], name, value)
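
The deleted __init__.py above guards optional dependencies and defers heavy imports through diffusers' internal _LazyModule helper. Below is a minimal sketch of the same lazy-export idea using only the standard library's module-level __getattr__ hook (PEP 562); the attribute and submodule names are placeholders, not diffusers' real ones.

# lazy package __init__.py -- illustrative sketch, not the diffusers implementation
import importlib
from typing import Any

# map public attribute name -> submodule that actually defines it (hypothetical names)
_import_structure = {
    "AltPipeline": ".pipeline_alt",
    "AltPipelineOutput": ".pipeline_output",
}

def __getattr__(name: str) -> Any:
    """Import the defining submodule only when the attribute is first accessed."""
    try:
        module_name = _import_structure[name]
    except KeyError:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}") from None
    module = importlib.import_module(module_name, package=__package__)
    value = getattr(module, name)
    globals()[name] = value  # cache so later lookups bypass __getattr__
    return value

def __dir__() -> list:
    return sorted(list(globals()) + list(_import_structure))

# An availability check (like is_torch_available() in the file above) would normally
# wrap the mapping and register dummy objects instead when the dependency is missing.
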
diff --git a/spaces/parkyzh/bingo/src/components/chat-attachments.tsx b/spaces/parkyzh/bingo/src/components/chat-attachments.tsx
deleted file mode 100644
index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000
--- a/spaces/parkyzh/bingo/src/components/chat-attachments.tsx
+++ /dev/null
@@ -1,37 +0,0 @@
-import Image from 'next/image'
-import ClearIcon from '@/assets/images/clear.svg'
-import RefreshIcon from '@/assets/images/refresh.svg'
-import { FileItem } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-import { useBing } from '@/lib/hooks/use-bing'
-
-type ChatAttachmentsProps = Pick<ReturnType<typeof useBing>, 'attachmentList' | 'setAttachmentList' | 'uploadImage'>
-
-export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) {
- return attachmentList.length ? (
-
- {attachmentList.map(file => (
-
- {file.status === 'loading' && (
-
-
-
)
- }
- {file.status !== 'error' && (
-
-
-
)
- }
- {file.status === 'error' && (
-
- uploadImage(file.url)} />
-
- )}
-
-
- ))}
-
- ) : null
-}
diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py
deleted file mode 100644
index c97a726db1b1f102fa6fe222bde48eef07fa7043..0000000000000000000000000000000000000000
--- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py
+++ /dev/null
@@ -1,76 +0,0 @@
-from pathlib import Path
-from typing import Callable, List, Optional, Tuple
-
-from monai.transforms import Compose
-
-from transforms.base import get_image_loading_transform, get_apply_crop_transform, get_stacking_transform
-from transforms.mask import get_mask_transform
-from transforms.coordinates import get_normalized_coordinates_transform
-from transforms.augmentation import *
-from transforms.backbone import *
-
-
-def _build_transforms_composition(hparams, transform_getters: List[Callable], *initial_args) -> Tuple[Compose, List[str]]:
- """
- Builds a transforms composition from the given functions, which take the hparams and loaded keys as arguments, and
- produce a Compose containing the desired transforms. The initialization function receives the provided initial arguments.
- """
- transforms = []
- keys = []
-
- for i in range(0, len(transform_getters)):
- if len(keys) == 0:
- assert i == 0, f"Function {transform_getters[i]} did not yield any loaded keys."
- # initialize
- transform, keys = transform_getters[0](hparams, *initial_args)
- else:
- transform, keys = transform_getters[i](hparams, keys)
- transforms.append(transform)
-
- return Compose(transforms), keys
-
-def _get_config_transform_by_name(transform_name: str) -> Callable:
- if transform_name == "intensity":
- return intensity_transform
- elif transform_name.startswith("spatial3d"):
- if "simple" in transform_name:
- return lambda hparams, loaded_keys: spatial_transform(hparams, loaded_keys, mode='simple')
- else:
- return lambda hparams, loaded_keys: spatial_transform(hparams, loaded_keys, mode='default')
- elif transform_name == "modelsgenesis":
- return models_genesis_transform
- elif transform_name == "pretrained_resnet":
- return pretrained_resnet_transform
- elif transform_name == "robustness":
- return robustness_transform
- else:
- raise ValueError(f"Unknown transform: {transform_name}")
-
-def get_training_transforms(hparams, image_dir: Path, mask_dir: Optional[Path] = None) -> Compose:
- transforms_base = [get_image_loading_transform, get_mask_transform]
-
- # robustness has to run early as we may need to operate on the whole volume for affine transformation and padding,
- # which must occur prior to any cropping or normalization
- if "robustness" in hparams.transforms: transforms_base.append(_get_config_transform_by_name("robustness"))
-
- transforms_base.extend([get_apply_crop_transform, get_normalized_coordinates_transform])
-
- # preprocessing transforms must be run first
- preprocessing_transforms = ["modelsgenesis", "pretrained_resnet"]
- config_transforms = [_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if transform_name in preprocessing_transforms]
-
- # then append the rest minus the robustness transform that is run earlier
- exclusion_criterion = lambda transform_name: transform_name in preprocessing_transforms or transform_name == "robustness"
-    config_transforms.extend([_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if not exclusion_criterion(transform_name)])
-
- # the stacking transform must not occur before config transforms are run to avoid any interference
- return _build_transforms_composition(hparams, transforms_base + config_transforms + [get_stacking_transform], image_dir, mask_dir)[0]
-
-def get_base_transforms(hparams, image_dir: Path, mask_dir: Optional[Path] = None) -> Compose:
- transforms_base = [get_image_loading_transform, get_mask_transform, get_apply_crop_transform, get_normalized_coordinates_transform]
-
- # apply preprocessing transforms
- preprocessing_transforms = ["modelsgenesis", "pretrained_resnet"]
- config_transforms = [_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if transform_name in preprocessing_transforms]
-
- return _build_transforms_composition(hparams, transforms_base + config_transforms + [get_stacking_transform], image_dir, mask_dir)[0]
\ No newline at end of file
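
_build_transforms_composition above chains "transform getter" functions: the first getter receives the initial arguments, every later getter receives the keys produced so far, and each returns a (transform, keys) pair. A small self-contained sketch of that chaining contract, with plain callables standing in for the MONAI transforms used by the deleted module (all names here are illustrative):

from typing import Callable, List, Tuple

# Each "getter" returns (transform, keys): a callable plus the data keys it produces.
def load_getter(hparams, image_dir) -> Tuple[Callable, List[str]]:
    return (lambda data: {**data, "image": f"loaded from {image_dir}"}, ["image"])

def crop_getter(hparams, loaded_keys) -> Tuple[Callable, List[str]]:
    return (lambda data: {k: data[k] for k in loaded_keys}, loaded_keys)

def build_composition(hparams, getters, *initial_args):
    transforms, keys = [], []
    for i, getter in enumerate(getters):
        if not keys:
            # only the first getter may start from the initial arguments
            assert i == 0, f"{getters[i - 1]} did not yield any loaded keys."
            transform, keys = getter(hparams, *initial_args)
        else:
            transform, keys = getter(hparams, keys)
        transforms.append(transform)
    # compose by simple chaining instead of monai.transforms.Compose
    def composed(data):
        for t in transforms:
            data = t(data)
        return data
    return composed, keys

pipeline, keys = build_composition({}, [load_getter, crop_getter], "/data/images")
print(pipeline({}), keys)  # {'image': 'loaded from /data/images'} ['image']
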
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py
deleted file mode 100644
index 75ce2dc9057a20a957abe2fbd4ef094dc4196684..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import abc
-
-from pip._internal.index.package_finder import PackageFinder
-from pip._internal.metadata.base import BaseDistribution
-from pip._internal.req import InstallRequirement
-
-
-class AbstractDistribution(metaclass=abc.ABCMeta):
- """A base class for handling installable artifacts.
-
- The requirements for anything installable are as follows:
-
- - we must be able to determine the requirement name
- (or we can't correctly handle the non-upgrade case).
-
- - for packages with setup requirements, we must also be able
- to determine their requirements without installing additional
- packages (for the same reason as run-time dependencies)
-
- - we must be able to create a Distribution object exposing the
- above metadata.
- """
-
- def __init__(self, req: InstallRequirement) -> None:
- super().__init__()
- self.req = req
-
- @abc.abstractmethod
- def get_metadata_distribution(self) -> BaseDistribution:
- raise NotImplementedError()
-
- @abc.abstractmethod
- def prepare_distribution_metadata(
- self,
- finder: PackageFinder,
- build_isolation: bool,
- check_build_deps: bool,
- ) -> None:
- raise NotImplementedError()
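
The AbstractDistribution docstring above spells out the contract: a concrete distribution must expose its metadata and be able to prepare that metadata without installing extra packages. Here is a schematic subclass built on stand-in types; these are not pip's real InstallRequirement / BaseDistribution / PackageFinder classes, which live in pip._internal and are not reproduced here.

import abc
from dataclasses import dataclass

@dataclass
class FakeRequirement:        # stand-in for pip's InstallRequirement
    name: str

class FakeMetadata:           # stand-in for pip's BaseDistribution
    def __init__(self, name: str) -> None:
        self.canonical_name = name

class AbstractDistribution(metaclass=abc.ABCMeta):
    def __init__(self, req: FakeRequirement) -> None:
        self.req = req

    @abc.abstractmethod
    def get_metadata_distribution(self) -> FakeMetadata: ...

    @abc.abstractmethod
    def prepare_distribution_metadata(self, finder, build_isolation: bool, check_build_deps: bool) -> None: ...

class InstalledDistribution(AbstractDistribution):
    """An already-built artifact: metadata is available without any preparation step."""
    def get_metadata_distribution(self) -> FakeMetadata:
        return FakeMetadata(self.req.name)

    def prepare_distribution_metadata(self, finder, build_isolation, check_build_deps) -> None:
        pass  # nothing to build

print(InstalledDistribution(FakeRequirement("requests")).get_metadata_distribution().canonical_name)
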
diff --git a/spaces/posak/Tune-A-Video-Training-UI/app.py b/spaces/posak/Tune-A-Video-Training-UI/app.py
deleted file mode 100644
index 3e0b9a282fc42c71e6c0f8d7f238a79a9c53c697..0000000000000000000000000000000000000000
--- a/spaces/posak/Tune-A-Video-Training-UI/app.py
+++ /dev/null
@@ -1,84 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-from subprocess import getoutput
-
-import gradio as gr
-import torch
-
-from app_inference import create_inference_demo
-from app_training import create_training_demo
-from app_upload import create_upload_demo
-from inference import InferencePipeline
-from trainer import Trainer
-
-TITLE = '# [Tune-A-Video](https://tuneavideo.github.io/) UI'
-
-ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-GPU_DATA = getoutput('nvidia-smi')
-SHARED_UI_WARNING = f'''## Attention - Training doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU.
-
-
-'''
-
-if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID:
- SETTINGS = f'Settings'
-else:
- SETTINGS = 'Settings'
-
-INVALID_GPU_WARNING = f'''## Attention - the specified GPU is invalid. Training may not work. Make sure you have selected a `T4 GPU` for this task.'''
-
-CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU.
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-You can use "T4 small/medium" to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if SPACE_ID == ORIGINAL_SPACE_ID:
- show_warning(SHARED_UI_WARNING)
- elif not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- elif (not 'T4' in GPU_DATA):
- show_warning(INVALID_GPU_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Run'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h b/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h
deleted file mode 100644
index beb539619a19e5025b6874fc5cbdc0b0b704557b..0000000000000000000000000000000000000000
--- a/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h
+++ /dev/null
@@ -1,191 +0,0 @@
-#ifndef PA_MAC_CORE_H
-#define PA_MAC_CORE_H
-/*
- * PortAudio Portable Real-Time Audio Library
- * Macintosh Core Audio specific extensions
- * portaudio.h should be included before this file.
- *
- * Copyright (c) 2005-2006 Bjorn Roche
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- * @ingroup public_header
- * @brief CoreAudio-specific PortAudio API extension header file.
- */
-
-#include "portaudio.h"
-
-#include <AudioUnit/AudioUnit.h>
-#include <AudioToolbox/AudioToolbox.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-
-/**
- * A pointer to a paMacCoreStreamInfo may be passed as
- * the hostApiSpecificStreamInfo in the PaStreamParameters struct
- * when opening a stream or querying the format. Use NULL, for the
- * defaults. Note that for duplex streams, flags for input and output
- * should be the same or behaviour is undefined.
- */
-typedef struct
-{
- unsigned long size; /**size of whole structure including this header */
- PaHostApiTypeId hostApiType; /**host API for which this data is intended */
- unsigned long version; /**structure version */
- unsigned long flags; /** flags to modify behaviour */
- SInt32 const * channelMap; /** Channel map for HAL channel mapping , if not needed, use NULL;*/
- unsigned long channelMapSize; /** Channel map size for HAL channel mapping , if not needed, use 0;*/
-} PaMacCoreStreamInfo;
-
-/**
- * Functions
- */
-
-
-/** Use this function to initialize a paMacCoreStreamInfo struct
- * using the requested flags. Note that channel mapping is turned
- * off after a call to this function.
- * @param data The datastructure to initialize
- * @param flags The flags to initialize the datastructure with.
-*/
-void PaMacCore_SetupStreamInfo( PaMacCoreStreamInfo *data, unsigned long flags );
-
-/** call this after pa_SetupMacCoreStreamInfo to use channel mapping as described in notes.txt.
- * @param data The stream info structure to assign a channel mapping to
- * @param channelMap The channel map array, as described in notes.txt. This array pointer will be used directly (ie the underlying data will not be copied), so the caller should not free the array until after the stream has been opened.
- * @param channelMapSize The size of the channel map array.
- */
-void PaMacCore_SetupChannelMap( PaMacCoreStreamInfo *data, const SInt32 * const channelMap, unsigned long channelMapSize );
-
-/**
- * Retrieve the AudioDeviceID of the input device assigned to an open stream
- *
- * @param s The stream to query.
- *
- * @return A valid AudioDeviceID, or NULL if an error occurred.
- */
-AudioDeviceID PaMacCore_GetStreamInputDevice( PaStream* s );
-
-/**
- * Retrieve the AudioDeviceID of the output device assigned to an open stream
- *
- * @param s The stream to query.
- *
- * @return A valid AudioDeviceID, or NULL if an error occurred.
- */
-AudioDeviceID PaMacCore_GetStreamOutputDevice( PaStream* s );
-
-/**
- * Returns a statically allocated string with the device's name
- * for the given channel. NULL will be returned on failure.
- *
- * This function's implementation is not complete!
- *
- * @param device The PortAudio device index.
- * @param channel The channel number whose name is requested.
- * @return a statically allocated string with the name of the device.
- * Because this string is statically allocated, it must be
- * copied if it is to be saved and used by the user after
- * another call to this function.
- *
- */
-const char *PaMacCore_GetChannelName( int device, int channelIndex, bool input );
-
-
-/** Retrieve the range of legal native buffer sizes for the specified device, in sample frames.
-
- @param device The global index of the PortAudio device about which the query is being made.
- @param minBufferSizeFrames A pointer to the location which will receive the minimum buffer size value.
- @param maxBufferSizeFrames A pointer to the location which will receive the maximum buffer size value.
-
- @see kAudioDevicePropertyBufferFrameSizeRange in the CoreAudio SDK.
- */
-PaError PaMacCore_GetBufferSizeRange( PaDeviceIndex device,
- long *minBufferSizeFrames, long *maxBufferSizeFrames );
-
-
-/**
- * Flags
- */
-
-/**
- * The following flags alter the behaviour of PA on the mac platform.
- * they can be ORed together. These should work both for opening and
- * checking a device.
- */
-
-/** Allows PortAudio to change things like the device's frame size,
- * which allows for much lower latency, but might disrupt the device
- * if other programs are using it, even when you are just Querying
- * the device. */
-#define paMacCoreChangeDeviceParameters (0x01)
-
-/** In combination with the above flag,
- * causes the stream opening to fail, unless the exact sample rates
- * are supported by the device. */
-#define paMacCoreFailIfConversionRequired (0x02)
-
-/** These flags set the SR conversion quality, if required. The weird ordering
- * allows Maximum Quality to be the default.*/
-#define paMacCoreConversionQualityMin (0x0100)
-#define paMacCoreConversionQualityMedium (0x0200)
-#define paMacCoreConversionQualityLow (0x0300)
-#define paMacCoreConversionQualityHigh (0x0400)
-#define paMacCoreConversionQualityMax (0x0000)
-
-/**
- * Here are some "preset" combinations of flags (above) to get to some
- * common configurations. THIS IS OVERKILL, but if more flags are added
- * it won't be.
- */
-
-/**This is the default setting: do as much sample rate conversion as possible
- * and as little mucking with the device as possible. */
-#define paMacCorePlayNice (0x00)
-/**This setting is tuned for pro audio apps. It allows SR conversion on input
- and output, but it tries to set the appropriate SR on the device.*/
-#define paMacCorePro (0x01)
-/**This is a setting to minimize CPU usage and still play nice.*/
-#define paMacCoreMinimizeCPUButPlayNice (0x0100)
-/**This is a setting to minimize CPU usage, even if that means interrupting the device. */
-#define paMacCoreMinimizeCPU (0x0101)
-
-
-#ifdef __cplusplus
-}
-#endif /** __cplusplus */
-
-#endif /** PA_MAC_CORE_H */
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css
deleted file mode 100644
index 7405bef579f275474ef94178dfeb94598d6cfe96..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css
+++ /dev/null
@@ -1 +0,0 @@
-.settings-wrapper.svelte-k0z87h.svelte-k0z87h{display:flex;justify-self:self-end}.text-button.svelte-k0z87h.svelte-k0z87h{border:1px solid var(--neutral-400);border-radius:var(--radius-sm);font-weight:300;font-size:var(--size-3);text-align:center;color:var(--neutral-400);height:var(--size-5);font-weight:700;padding:0 5px;margin-left:5px}.text-button.svelte-k0z87h.svelte-k0z87h:hover,.text-button.svelte-k0z87h.svelte-k0z87h:focus{color:var(--color-accent);border-color:var(--color-accent)}.controls.svelte-k0z87h.svelte-k0z87h{display:grid;grid-template-columns:1fr 1fr 1fr;margin-top:5px;overflow:hidden;align-items:center}@media (max-width: 320px){.controls.svelte-k0z87h.svelte-k0z87h{display:flex;flex-wrap:wrap}.controls.svelte-k0z87h .svelte-k0z87h{margin:var(--spacing-sm)}.controls.svelte-k0z87h .text-button.svelte-k0z87h{margin-left:0}}.action.svelte-k0z87h.svelte-k0z87h{width:var(--size-5);color:var(--neutral-400);margin-left:var(--spacing-md)}.icon.svelte-k0z87h.svelte-k0z87h:hover,.icon.svelte-k0z87h.svelte-k0z87h:focus{color:var(--color-accent)}.play-pause-wrapper.svelte-k0z87h.svelte-k0z87h{display:flex;justify-self:center}.playback.svelte-k0z87h.svelte-k0z87h{border:1px solid var(--neutral-400);border-radius:var(--radius-sm);width:5.5ch;font-weight:300;font-size:var(--size-3);text-align:center;color:var(--neutral-400);height:var(--size-5);font-weight:700}.playback.svelte-k0z87h.svelte-k0z87h:hover{color:var(--color-accent);border-color:var(--color-accent)}.rewind.svelte-k0z87h.svelte-k0z87h,.skip.svelte-k0z87h.svelte-k0z87h{margin:0 10px;color:var(--neutral-400)}.play-pause-button.svelte-k0z87h.svelte-k0z87h{width:var(--size-8);display:flex;align-items:center;justify-content:center;color:var(--neutral-400);fill:var(--neutral-400)}.component-wrapper.svelte-15pl8d9{padding:var(--size-3)}.timestamps.svelte-15pl8d9{display:flex;justify-content:space-between;align-items:center;width:100%;padding:var(--size-1) 0}#time.svelte-15pl8d9,#duration.svelte-15pl8d9{color:var(--neutral-400)}#trim-duration.svelte-15pl8d9{color:var(--color-accent);margin-right:var(--spacing-sm)}.waveform-container.svelte-15pl8d9{display:flex;align-items:center;justify-content:center;width:var(--size-full)}#waveform.svelte-15pl8d9{width:100%;height:100%;position:relative}.icon-buttons.svelte-rvdo70{display:flex;position:absolute;top:6px;right:6px;gap:var(--size-1)}#mic-select.svelte-pjb0ac.svelte-pjb0ac{height:var(--size-8);background:var(--block-background-fill);padding:0px var(--spacing-xxl);border-radius:var(--radius-full);font-size:var(--text-md);border:1px solid var(--neutral-400)}.controls.svelte-pjb0ac.svelte-pjb0ac{display:flex;align-items:center;justify-content:space-between;flex-wrap:wrap;overflow:hidden}.controls.svelte-pjb0ac select.svelte-pjb0ac{text-overflow:ellipsis;margin:var(--size-2) 0}@media (max-width: 375px){.controls.svelte-pjb0ac select.svelte-pjb0ac{width:100%}}.wrapper.svelte-pjb0ac.svelte-pjb0ac{display:flex;align-items:center;justify-content:center}#record.svelte-pjb0ac.svelte-pjb0ac{margin-right:var(--spacing-md)}.stop-button-paused.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--neutral-400);margin-right:5px}.stop-button-paused.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 
var(--spacing-xl)}.stop-button.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl);animation:svelte-pjb0ac-scaling 1.8s infinite}.stop-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--primary-600);margin-right:5px}.record-button.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.record-button.svelte-pjb0ac.svelte-pjb0ac{height:var(--size-8);width:var(--size-24);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);display:flex;align-items:center;border:1px solid var(--neutral-400)}.stop-button.svelte-pjb0ac.svelte-pjb0ac:disabled{cursor:not-allowed}.record-button.svelte-pjb0ac.svelte-pjb0ac:disabled{cursor:not-allowed;opacity:.5}@keyframes svelte-pjb0ac-scaling{0%{background-color:var(--primary-600);scale:1}50%{background-color:var(--primary-600);scale:1.2}to{background-color:var(--primary-600);scale:1}}.pause-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);border:1px solid var(--neutral-400);border-radius:var(--radius-3xl);padding:var(--spacing-md)}.resume-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);border:1px solid var(--neutral-400);border-radius:var(--radius-3xl);padding:var(--spacing-xl);line-height:1px;font-size:var(--text-md)}::part(region){border-radius:var(--radius-md);height:98%!important;border:1px solid var(--color-accent);border-width:1px 3px}::part(region-handle){width:5px!important;border:none}#microphone.svelte-imtedr{width:100%;display:none}.component-wrapper.svelte-imtedr{padding:var(--size-3)}#timestamps.svelte-imtedr{display:flex;justify-content:space-between;align-items:center;width:100%;padding:var(--size-1) 0;margin:var(--spacing-md) 0}#time.svelte-imtedr,#duration.svelte-imtedr{color:var(--neutral-400)}#trim-duration.svelte-imtedr{color:var(--color-accent);margin-right:var(--spacing-sm)}.mic-wrap.svelte-16e5vwh{display:block;align-items:center;margin:var(--spacing-xl)}.stop-button-paused.svelte-16e5vwh{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--neutral-400);margin-right:5px}.stop-button-paused.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.stop-button.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl);animation:scaling 1.8s infinite}.stop-button.svelte-16e5vwh{height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--primary-600);margin-right:5px;display:flex}.record-button.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 
var(--spacing-xl)}.record-button.svelte-16e5vwh{height:var(--size-8);width:var(--size-24);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);display:flex;align-items:center;border:1px solid var(--neutral-400)}.source-selection.svelte-10shjqk{display:flex;align-items:center;justify-content:center;border-top:1px solid var(--border-color-primary);width:95%;margin:0 auto}.icon.svelte-10shjqk{width:22px;height:22px;margin:var(--spacing-lg) var(--spacing-xs);padding:var(--spacing-xs);color:var(--neutral-400);border-radius:var(--radius-md)}.icon.svelte-10shjqk:hover,.icon.svelte-10shjqk:focus{color:var(--color-accent)}
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py
deleted file mode 100644
index 066f41130d3b4b5fcc9aa22091e382f516b136aa..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import abc
-import importlib
-import io
-import sys
-import types
-import pathlib
-import contextlib
-
-from . import data01
-from ..abc import ResourceReader
-from ._compat import import_helper, os_helper
-from . import zip as zip_
-
-
-from importlib.machinery import ModuleSpec
-
-
-class Reader(ResourceReader):
- def __init__(self, **kwargs):
- vars(self).update(kwargs)
-
- def get_resource_reader(self, package):
- return self
-
- def open_resource(self, path):
- self._path = path
- if isinstance(self.file, Exception):
- raise self.file
- return self.file
-
- def resource_path(self, path_):
- self._path = path_
- if isinstance(self.path, Exception):
- raise self.path
- return self.path
-
- def is_resource(self, path_):
- self._path = path_
- if isinstance(self.path, Exception):
- raise self.path
-
- def part(entry):
- return entry.split('/')
-
- return any(
- len(parts) == 1 and parts[0] == path_ for parts in map(part, self._contents)
- )
-
- def contents(self):
- if isinstance(self.path, Exception):
- raise self.path
- yield from self._contents
-
-
-def create_package_from_loader(loader, is_package=True):
- name = 'testingpackage'
- module = types.ModuleType(name)
- spec = ModuleSpec(name, loader, origin='does-not-exist', is_package=is_package)
- module.__spec__ = spec
- module.__loader__ = loader
- return module
-
-
-def create_package(file=None, path=None, is_package=True, contents=()):
- return create_package_from_loader(
- Reader(file=file, path=path, _contents=contents),
- is_package,
- )
-
-
-class CommonTests(metaclass=abc.ABCMeta):
- """
- Tests shared by test_open, test_path, and test_read.
- """
-
- @abc.abstractmethod
- def execute(self, package, path):
- """
- Call the pertinent legacy API function (e.g. open_text, path)
- on package and path.
- """
-
- def test_package_name(self):
- """
- Passing in the package name should succeed.
- """
- self.execute(data01.__name__, 'utf-8.file')
-
- def test_package_object(self):
- """
- Passing in the package itself should succeed.
- """
- self.execute(data01, 'utf-8.file')
-
- def test_string_path(self):
- """
- Passing in a string for the path should succeed.
- """
- path = 'utf-8.file'
- self.execute(data01, path)
-
- def test_pathlib_path(self):
- """
- Passing in a pathlib.PurePath object for the path should succeed.
- """
- path = pathlib.PurePath('utf-8.file')
- self.execute(data01, path)
-
- def test_importing_module_as_side_effect(self):
- """
- The anchor package can already be imported.
- """
- del sys.modules[data01.__name__]
- self.execute(data01.__name__, 'utf-8.file')
-
- def test_missing_path(self):
- """
- Attempting to open or read or request the path for a
- non-existent path should succeed if open_resource
- can return a viable data stream.
- """
- bytes_data = io.BytesIO(b'Hello, world!')
- package = create_package(file=bytes_data, path=FileNotFoundError())
- self.execute(package, 'utf-8.file')
- self.assertEqual(package.__loader__._path, 'utf-8.file')
-
- def test_extant_path(self):
- # Attempting to open or read or request the path when the
- # path does exist should still succeed. Does not assert
- # anything about the result.
- bytes_data = io.BytesIO(b'Hello, world!')
- # any path that exists
- path = __file__
- package = create_package(file=bytes_data, path=path)
- self.execute(package, 'utf-8.file')
- self.assertEqual(package.__loader__._path, 'utf-8.file')
-
- def test_useless_loader(self):
- package = create_package(file=FileNotFoundError(), path=FileNotFoundError())
- with self.assertRaises(FileNotFoundError):
- self.execute(package, 'utf-8.file')
-
-
-class ZipSetupBase:
- ZIP_MODULE = 'data01'
-
- def setUp(self):
- self.fixtures = contextlib.ExitStack()
- self.addCleanup(self.fixtures.close)
-
- modules = import_helper.modules_setup()
- self.addCleanup(import_helper.modules_cleanup, *modules)
-
- temp_dir = self.fixtures.enter_context(os_helper.temp_dir())
- modules = pathlib.Path(temp_dir) / 'zipped modules.zip'
- src_path = pathlib.Path(__file__).parent.joinpath(self.ZIP_MODULE)
- self.fixtures.enter_context(
- import_helper.DirsOnSysPath(str(zip_.make_zip_file(src_path, modules)))
- )
-
- self.data = importlib.import_module(self.ZIP_MODULE)
-
-
-class ZipSetup(ZipSetupBase):
- pass
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py
deleted file mode 100644
index f6e3ad3974ca63115e1f8124e743235bb300f1a1..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py
+++ /dev/null
@@ -1,437 +0,0 @@
-import sys
-import re
-import os
-
-from configparser import RawConfigParser
-
-__all__ = ['FormatError', 'PkgNotFound', 'LibraryInfo', 'VariableSet',
- 'read_config', 'parse_flags']
-
-_VAR = re.compile(r'\$\{([a-zA-Z0-9_-]+)\}')
-
-class FormatError(OSError):
- """
- Exception thrown when there is a problem parsing a configuration file.
-
- """
- def __init__(self, msg):
- self.msg = msg
-
- def __str__(self):
- return self.msg
-
-class PkgNotFound(OSError):
- """Exception raised when a package can not be located."""
- def __init__(self, msg):
- self.msg = msg
-
- def __str__(self):
- return self.msg
-
-def parse_flags(line):
- """
- Parse a line from a config file containing compile flags.
-
- Parameters
- ----------
- line : str
- A single line containing one or more compile flags.
-
- Returns
- -------
- d : dict
- Dictionary of parsed flags, split into relevant categories.
- These categories are the keys of `d`:
-
- * 'include_dirs'
- * 'library_dirs'
- * 'libraries'
- * 'macros'
- * 'ignored'
-
- """
- d = {'include_dirs': [], 'library_dirs': [], 'libraries': [],
- 'macros': [], 'ignored': []}
-
- flags = (' ' + line).split(' -')
- for flag in flags:
- flag = '-' + flag
- if len(flag) > 0:
- if flag.startswith('-I'):
- d['include_dirs'].append(flag[2:].strip())
- elif flag.startswith('-L'):
- d['library_dirs'].append(flag[2:].strip())
- elif flag.startswith('-l'):
- d['libraries'].append(flag[2:].strip())
- elif flag.startswith('-D'):
- d['macros'].append(flag[2:].strip())
- else:
- d['ignored'].append(flag)
-
- return d
-
-def _escape_backslash(val):
- return val.replace('\\', '\\\\')
-
-class LibraryInfo:
- """
- Object containing build information about a library.
-
- Parameters
- ----------
- name : str
- The library name.
- description : str
- Description of the library.
- version : str
- Version string.
- sections : dict
- The sections of the configuration file for the library. The keys are
- the section headers, the values the text under each header.
- vars : class instance
- A `VariableSet` instance, which contains ``(name, value)`` pairs for
- variables defined in the configuration file for the library.
- requires : sequence, optional
- The required libraries for the library to be installed.
-
- Notes
- -----
- All input parameters (except "sections" which is a method) are available as
- attributes of the same name.
-
- """
- def __init__(self, name, description, version, sections, vars, requires=None):
- self.name = name
- self.description = description
- if requires:
- self.requires = requires
- else:
- self.requires = []
- self.version = version
- self._sections = sections
- self.vars = vars
-
- def sections(self):
- """
- Return the section headers of the config file.
-
- Parameters
- ----------
- None
-
- Returns
- -------
- keys : list of str
- The list of section headers.
-
- """
- return list(self._sections.keys())
-
- def cflags(self, section="default"):
- val = self.vars.interpolate(self._sections[section]['cflags'])
- return _escape_backslash(val)
-
- def libs(self, section="default"):
- val = self.vars.interpolate(self._sections[section]['libs'])
- return _escape_backslash(val)
-
- def __str__(self):
- m = ['Name: %s' % self.name, 'Description: %s' % self.description]
-        if self.requires:
-            m.append('Requires: %s' % ",".join(self.requires))
-        else:
-            m.append('Requires:')
- m.append('Version: %s' % self.version)
-
- return "\n".join(m)
-
-class VariableSet:
- """
- Container object for the variables defined in a config file.
-
- `VariableSet` can be used as a plain dictionary, with the variable names
- as keys.
-
- Parameters
- ----------
- d : dict
- Dict of items in the "variables" section of the configuration file.
-
- """
- def __init__(self, d):
- self._raw_data = dict([(k, v) for k, v in d.items()])
-
- self._re = {}
- self._re_sub = {}
-
- self._init_parse()
-
- def _init_parse(self):
- for k, v in self._raw_data.items():
- self._init_parse_var(k, v)
-
- def _init_parse_var(self, name, value):
- self._re[name] = re.compile(r'\$\{%s\}' % name)
- self._re_sub[name] = value
-
- def interpolate(self, value):
- # Brute force: we keep interpolating until there is no '${var}' anymore
- # or until interpolated string is equal to input string
- def _interpolate(value):
- for k in self._re.keys():
- value = self._re[k].sub(self._re_sub[k], value)
- return value
- while _VAR.search(value):
- nvalue = _interpolate(value)
- if nvalue == value:
- break
- value = nvalue
-
- return value
-
- def variables(self):
- """
- Return the list of variable names.
-
- Parameters
- ----------
- None
-
- Returns
- -------
- names : list of str
- The names of all variables in the `VariableSet` instance.
-
- """
- return list(self._raw_data.keys())
-
- # Emulate a dict to set/get variables values
- def __getitem__(self, name):
- return self._raw_data[name]
-
- def __setitem__(self, name, value):
- self._raw_data[name] = value
- self._init_parse_var(name, value)
-
-def parse_meta(config):
- if not config.has_section('meta'):
- raise FormatError("No meta section found !")
-
- d = dict(config.items('meta'))
-
- for k in ['name', 'description', 'version']:
- if not k in d:
- raise FormatError("Option %s (section [meta]) is mandatory, "
- "but not found" % k)
-
- if not 'requires' in d:
- d['requires'] = []
-
- return d
-
-def parse_variables(config):
- if not config.has_section('variables'):
- raise FormatError("No variables section found !")
-
- d = {}
-
- for name, value in config.items("variables"):
- d[name] = value
-
- return VariableSet(d)
-
-def parse_sections(config):
- return meta_d, r
-
-def pkg_to_filename(pkg_name):
- return "%s.ini" % pkg_name
-
-def parse_config(filename, dirs=None):
- if dirs:
- filenames = [os.path.join(d, filename) for d in dirs]
- else:
- filenames = [filename]
-
- config = RawConfigParser()
-
- n = config.read(filenames)
- if not len(n) >= 1:
- raise PkgNotFound("Could not find file(s) %s" % str(filenames))
-
- # Parse meta and variables sections
- meta = parse_meta(config)
-
- vars = {}
- if config.has_section('variables'):
- for name, value in config.items("variables"):
- vars[name] = _escape_backslash(value)
-
- # Parse "normal" sections
- secs = [s for s in config.sections() if not s in ['meta', 'variables']]
- sections = {}
-
- requires = {}
- for s in secs:
- d = {}
- if config.has_option(s, "requires"):
- requires[s] = config.get(s, 'requires')
-
- for name, value in config.items(s):
- d[name] = value
- sections[s] = d
-
- return meta, vars, sections, requires
-
-def _read_config_imp(filenames, dirs=None):
- def _read_config(f):
- meta, vars, sections, reqs = parse_config(f, dirs)
- # recursively add sections and variables of required libraries
- for rname, rvalue in reqs.items():
- nmeta, nvars, nsections, nreqs = _read_config(pkg_to_filename(rvalue))
-
- # Update var dict for variables not in 'top' config file
- for k, v in nvars.items():
- if not k in vars:
- vars[k] = v
-
- # Update sec dict
- for oname, ovalue in nsections[rname].items():
- if ovalue:
- sections[rname][oname] += ' %s' % ovalue
-
- return meta, vars, sections, reqs
-
- meta, vars, sections, reqs = _read_config(filenames)
-
- # FIXME: document this. If pkgname is defined in the variables section, and
- # there is no pkgdir variable defined, pkgdir is automatically defined to
- # the path of pkgname. This requires the package to be imported to work
- if not 'pkgdir' in vars and "pkgname" in vars:
- pkgname = vars["pkgname"]
- if not pkgname in sys.modules:
- raise ValueError("You should import %s to get information on %s" %
- (pkgname, meta["name"]))
-
- mod = sys.modules[pkgname]
- vars["pkgdir"] = _escape_backslash(os.path.dirname(mod.__file__))
-
- return LibraryInfo(name=meta["name"], description=meta["description"],
- version=meta["version"], sections=sections, vars=VariableSet(vars))
-
-# Trivial cache to cache LibraryInfo instances creation. To be really
-# efficient, the cache should be handled in read_config, since a same file can
-# be parsed many time outside LibraryInfo creation, but I doubt this will be a
-# problem in practice
-_CACHE = {}
-def read_config(pkgname, dirs=None):
- """
- Return library info for a package from its configuration file.
-
- Parameters
- ----------
- pkgname : str
- Name of the package (should match the name of the .ini file, without
- the extension, e.g. foo for the file foo.ini).
- dirs : sequence, optional
- If given, should be a sequence of directories - usually including
- the NumPy base directory - where to look for npy-pkg-config files.
-
- Returns
- -------
- pkginfo : class instance
- The `LibraryInfo` instance containing the build information.
-
- Raises
- ------
- PkgNotFound
- If the package is not found.
-
- See Also
- --------
- misc_util.get_info, misc_util.get_pkg_info
-
- Examples
- --------
- >>> npymath_info = np.distutils.npy_pkg_config.read_config('npymath')
- >>> type(npymath_info)
-    <class 'numpy.distutils.npy_pkg_config.LibraryInfo'>
- >>> print(npymath_info)
- Name: npymath
- Description: Portable, core math library implementing C99 standard
- Requires:
- Version: 0.1 #random
-
- """
- try:
- return _CACHE[pkgname]
- except KeyError:
- v = _read_config_imp(pkg_to_filename(pkgname), dirs)
- _CACHE[pkgname] = v
- return v
-
-# TODO:
-# - implements version comparison (modversion + atleast)
-
-# pkg-config simple emulator - useful for debugging, and maybe later to query
-# the system
-if __name__ == '__main__':
- from optparse import OptionParser
- import glob
-
- parser = OptionParser()
- parser.add_option("--cflags", dest="cflags", action="store_true",
- help="output all preprocessor and compiler flags")
- parser.add_option("--libs", dest="libs", action="store_true",
- help="output all linker flags")
- parser.add_option("--use-section", dest="section",
- help="use this section instead of default for options")
- parser.add_option("--version", dest="version", action="store_true",
- help="output version")
- parser.add_option("--atleast-version", dest="min_version",
- help="Minimal version")
- parser.add_option("--list-all", dest="list_all", action="store_true",
- help="Minimal version")
- parser.add_option("--define-variable", dest="define_variable",
- help="Replace variable with the given value")
-
- (options, args) = parser.parse_args(sys.argv)
-
- if len(args) < 2:
- raise ValueError("Expect package name on the command line:")
-
- if options.list_all:
- files = glob.glob("*.ini")
- for f in files:
- info = read_config(f)
- print("%s\t%s - %s" % (info.name, info.name, info.description))
-
- pkg_name = args[1]
- d = os.environ.get('NPY_PKG_CONFIG_PATH')
- if d:
- info = read_config(pkg_name, ['numpy/core/lib/npy-pkg-config', '.', d])
- else:
- info = read_config(pkg_name, ['numpy/core/lib/npy-pkg-config', '.'])
-
- if options.section:
- section = options.section
- else:
- section = "default"
-
- if options.define_variable:
- m = re.search(r'([\S]+)=([\S]+)', options.define_variable)
- if not m:
- raise ValueError("--define-variable option should be of "
- "the form --define-variable=foo=bar")
- else:
- name = m.group(1)
- value = m.group(2)
- info.vars[name] = value
-
- if options.cflags:
- print(info.cflags(section))
- if options.libs:
- print(info.libs(section))
- if options.version:
- print(info.version)
- if options.min_version:
- print(info.version >= options.min_version)
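
parse_flags above splits a compiler/linker flag string into categories by the -I, -L, -l, and -D prefixes. A short usage sketch, assuming a NumPy version that still ships numpy.distutils (the flag line itself is made up):

# Usage sketch for parse_flags as defined in the module above.
from numpy.distutils.npy_pkg_config import parse_flags  # numpy.distutils is deprecated and absent on Python >= 3.12

d = parse_flags("-I/usr/include -L/usr/lib -lm -DNDEBUG -fopenmp")
print(d["include_dirs"])  # ['/usr/include']
print(d["library_dirs"])  # ['/usr/lib']
print(d["libraries"])     # ['m']
print(d["macros"])        # ['NDEBUG']
print(d["ignored"])       # ['-', '-fopenmp'] -- unrecognized flags, plus a stray '-' from the split
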
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py
deleted file mode 100644
index eb008c6002c86c94b180533230f849c909d10f39..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py
+++ /dev/null
@@ -1,541 +0,0 @@
-"""Test functions for matrix module
-
-"""
-from numpy.testing import (
- assert_equal, assert_array_equal, assert_array_max_ulp,
- assert_array_almost_equal, assert_raises, assert_
-)
-from numpy import (
- arange, add, fliplr, flipud, zeros, ones, eye, array, diag, histogram2d,
- tri, mask_indices, triu_indices, triu_indices_from, tril_indices,
- tril_indices_from, vander,
-)
-import numpy as np
-
-import pytest
-
-
-def get_mat(n):
- data = arange(n)
- data = add.outer(data, data)
- return data
-
-
-class TestEye:
- def test_basic(self):
- assert_equal(eye(4),
- array([[1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, 1, 0],
- [0, 0, 0, 1]]))
-
- assert_equal(eye(4, dtype='f'),
- array([[1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, 1, 0],
- [0, 0, 0, 1]], 'f'))
-
- assert_equal(eye(3) == 1,
- eye(3, dtype=bool))
-
- def test_uint64(self):
- # Regression test for gh-9982
- assert_equal(eye(np.uint64(2), dtype=int), array([[1, 0], [0, 1]]))
- assert_equal(eye(np.uint64(2), M=np.uint64(4), k=np.uint64(1)),
- array([[0, 1, 0, 0], [0, 0, 1, 0]]))
-
- def test_diag(self):
- assert_equal(eye(4, k=1),
- array([[0, 1, 0, 0],
- [0, 0, 1, 0],
- [0, 0, 0, 1],
- [0, 0, 0, 0]]))
-
- assert_equal(eye(4, k=-1),
- array([[0, 0, 0, 0],
- [1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, 1, 0]]))
-
- def test_2d(self):
- assert_equal(eye(4, 3),
- array([[1, 0, 0],
- [0, 1, 0],
- [0, 0, 1],
- [0, 0, 0]]))
-
- assert_equal(eye(3, 4),
- array([[1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, 1, 0]]))
-
- def test_diag2d(self):
- assert_equal(eye(3, 4, k=2),
- array([[0, 0, 1, 0],
- [0, 0, 0, 1],
- [0, 0, 0, 0]]))
-
- assert_equal(eye(4, 3, k=-2),
- array([[0, 0, 0],
- [0, 0, 0],
- [1, 0, 0],
- [0, 1, 0]]))
-
- def test_eye_bounds(self):
- assert_equal(eye(2, 2, 1), [[0, 1], [0, 0]])
- assert_equal(eye(2, 2, -1), [[0, 0], [1, 0]])
- assert_equal(eye(2, 2, 2), [[0, 0], [0, 0]])
- assert_equal(eye(2, 2, -2), [[0, 0], [0, 0]])
- assert_equal(eye(3, 2, 2), [[0, 0], [0, 0], [0, 0]])
- assert_equal(eye(3, 2, 1), [[0, 1], [0, 0], [0, 0]])
- assert_equal(eye(3, 2, -1), [[0, 0], [1, 0], [0, 1]])
- assert_equal(eye(3, 2, -2), [[0, 0], [0, 0], [1, 0]])
- assert_equal(eye(3, 2, -3), [[0, 0], [0, 0], [0, 0]])
-
- def test_strings(self):
- assert_equal(eye(2, 2, dtype='S3'),
- [[b'1', b''], [b'', b'1']])
-
- def test_bool(self):
- assert_equal(eye(2, 2, dtype=bool), [[True, False], [False, True]])
-
- def test_order(self):
- mat_c = eye(4, 3, k=-1)
- mat_f = eye(4, 3, k=-1, order='F')
- assert_equal(mat_c, mat_f)
- assert mat_c.flags.c_contiguous
- assert not mat_c.flags.f_contiguous
- assert not mat_f.flags.c_contiguous
- assert mat_f.flags.f_contiguous
-
-
-class TestDiag:
- def test_vector(self):
- vals = (100 * arange(5)).astype('l')
- b = zeros((5, 5))
- for k in range(5):
- b[k, k] = vals[k]
- assert_equal(diag(vals), b)
- b = zeros((7, 7))
- c = b.copy()
- for k in range(5):
- b[k, k + 2] = vals[k]
- c[k + 2, k] = vals[k]
- assert_equal(diag(vals, k=2), b)
- assert_equal(diag(vals, k=-2), c)
-
- def test_matrix(self, vals=None):
- if vals is None:
- vals = (100 * get_mat(5) + 1).astype('l')
- b = zeros((5,))
- for k in range(5):
- b[k] = vals[k, k]
- assert_equal(diag(vals), b)
- b = b * 0
- for k in range(3):
- b[k] = vals[k, k + 2]
- assert_equal(diag(vals, 2), b[:3])
- for k in range(3):
- b[k] = vals[k + 2, k]
- assert_equal(diag(vals, -2), b[:3])
-
- def test_fortran_order(self):
- vals = array((100 * get_mat(5) + 1), order='F', dtype='l')
- self.test_matrix(vals)
-
- def test_diag_bounds(self):
- A = [[1, 2], [3, 4], [5, 6]]
- assert_equal(diag(A, k=2), [])
- assert_equal(diag(A, k=1), [2])
- assert_equal(diag(A, k=0), [1, 4])
- assert_equal(diag(A, k=-1), [3, 6])
- assert_equal(diag(A, k=-2), [5])
- assert_equal(diag(A, k=-3), [])
-
- def test_failure(self):
- assert_raises(ValueError, diag, [[[1]]])
-
-
-class TestFliplr:
- def test_basic(self):
- assert_raises(ValueError, fliplr, ones(4))
- a = get_mat(4)
- b = a[:, ::-1]
- assert_equal(fliplr(a), b)
- a = [[0, 1, 2],
- [3, 4, 5]]
- b = [[2, 1, 0],
- [5, 4, 3]]
- assert_equal(fliplr(a), b)
-
-
-class TestFlipud:
- def test_basic(self):
- a = get_mat(4)
- b = a[::-1, :]
- assert_equal(flipud(a), b)
- a = [[0, 1, 2],
- [3, 4, 5]]
- b = [[3, 4, 5],
- [0, 1, 2]]
- assert_equal(flipud(a), b)
-
-
-class TestHistogram2d:
- def test_simple(self):
- x = array(
- [0.41702200, 0.72032449, 1.1437481e-4, 0.302332573, 0.146755891])
- y = array(
- [0.09233859, 0.18626021, 0.34556073, 0.39676747, 0.53881673])
- xedges = np.linspace(0, 1, 10)
- yedges = np.linspace(0, 1, 10)
- H = histogram2d(x, y, (xedges, yedges))[0]
- answer = array(
- [[0, 0, 0, 1, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 1, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0],
- [1, 0, 1, 0, 0, 0, 0, 0, 0],
- [0, 1, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0]])
- assert_array_equal(H.T, answer)
- H = histogram2d(x, y, xedges)[0]
- assert_array_equal(H.T, answer)
- H, xedges, yedges = histogram2d(list(range(10)), list(range(10)))
- assert_array_equal(H, eye(10, 10))
- assert_array_equal(xedges, np.linspace(0, 9, 11))
- assert_array_equal(yedges, np.linspace(0, 9, 11))
-
- def test_asym(self):
- x = array([1, 1, 2, 3, 4, 4, 4, 5])
- y = array([1, 3, 2, 0, 1, 2, 3, 4])
- H, xed, yed = histogram2d(
- x, y, (6, 5), range=[[0, 6], [0, 5]], density=True)
- answer = array(
- [[0., 0, 0, 0, 0],
- [0, 1, 0, 1, 0],
- [0, 0, 1, 0, 0],
- [1, 0, 0, 0, 0],
- [0, 1, 1, 1, 0],
- [0, 0, 0, 0, 1]])
- assert_array_almost_equal(H, answer/8., 3)
- assert_array_equal(xed, np.linspace(0, 6, 7))
- assert_array_equal(yed, np.linspace(0, 5, 6))
-
- def test_density(self):
- x = array([1, 2, 3, 1, 2, 3, 1, 2, 3])
- y = array([1, 1, 1, 2, 2, 2, 3, 3, 3])
- H, xed, yed = histogram2d(
- x, y, [[1, 2, 3, 5], [1, 2, 3, 5]], density=True)
- answer = array([[1, 1, .5],
- [1, 1, .5],
- [.5, .5, .25]])/9.
- assert_array_almost_equal(H, answer, 3)
-
- def test_all_outliers(self):
- r = np.random.rand(100) + 1. + 1e6 # histogramdd rounds by decimal=6
- H, xed, yed = histogram2d(r, r, (4, 5), range=([0, 1], [0, 1]))
- assert_array_equal(H, 0)
-
- def test_empty(self):
- a, edge1, edge2 = histogram2d([], [], bins=([0, 1], [0, 1]))
- assert_array_max_ulp(a, array([[0.]]))
-
- a, edge1, edge2 = histogram2d([], [], bins=4)
- assert_array_max_ulp(a, np.zeros((4, 4)))
-
- def test_binparameter_combination(self):
- x = array(
- [0, 0.09207008, 0.64575234, 0.12875982, 0.47390599,
- 0.59944483, 1])
- y = array(
- [0, 0.14344267, 0.48988575, 0.30558665, 0.44700682,
- 0.15886423, 1])
- edges = (0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1)
- H, xe, ye = histogram2d(x, y, (edges, 4))
- answer = array(
- [[2., 0., 0., 0.],
- [0., 1., 0., 0.],
- [0., 0., 0., 0.],
- [0., 0., 0., 0.],
- [0., 1., 0., 0.],
- [1., 0., 0., 0.],
- [0., 1., 0., 0.],
- [0., 0., 0., 0.],
- [0., 0., 0., 0.],
- [0., 0., 0., 1.]])
- assert_array_equal(H, answer)
- assert_array_equal(ye, array([0., 0.25, 0.5, 0.75, 1]))
- H, xe, ye = histogram2d(x, y, (4, edges))
- answer = array(
- [[1., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
- [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
- [0., 1., 0., 0., 1., 0., 0., 0., 0., 0.],
- [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
- assert_array_equal(H, answer)
- assert_array_equal(xe, array([0., 0.25, 0.5, 0.75, 1]))
-
- def test_dispatch(self):
- class ShouldDispatch:
- def __array_function__(self, function, types, args, kwargs):
- return types, args, kwargs
-
- xy = [1, 2]
- s_d = ShouldDispatch()
- r = histogram2d(s_d, xy)
- # Cannot use assert_equal since that dispatches...
- assert_(r == ((ShouldDispatch,), (s_d, xy), {}))
- r = histogram2d(xy, s_d)
- assert_(r == ((ShouldDispatch,), (xy, s_d), {}))
- r = histogram2d(xy, xy, bins=s_d)
- assert_(r, ((ShouldDispatch,), (xy, xy), dict(bins=s_d)))
- r = histogram2d(xy, xy, bins=[s_d, 5])
- assert_(r, ((ShouldDispatch,), (xy, xy), dict(bins=[s_d, 5])))
- assert_raises(Exception, histogram2d, xy, xy, bins=[s_d])
- r = histogram2d(xy, xy, weights=s_d)
- assert_(r, ((ShouldDispatch,), (xy, xy), dict(weights=s_d)))
-
- @pytest.mark.parametrize(("x_len", "y_len"), [(10, 11), (20, 19)])
- def test_bad_length(self, x_len, y_len):
- x, y = np.ones(x_len), np.ones(y_len)
- with pytest.raises(ValueError,
- match='x and y must have the same length.'):
- histogram2d(x, y)
-
-
-class TestTri:
- def test_dtype(self):
- out = array([[1, 0, 0],
- [1, 1, 0],
- [1, 1, 1]])
- assert_array_equal(tri(3), out)
- assert_array_equal(tri(3, dtype=bool), out.astype(bool))
-
-
-def test_tril_triu_ndim2():
- for dtype in np.typecodes['AllFloat'] + np.typecodes['AllInteger']:
- a = np.ones((2, 2), dtype=dtype)
- b = np.tril(a)
- c = np.triu(a)
- assert_array_equal(b, [[1, 0], [1, 1]])
- assert_array_equal(c, b.T)
- # should return the same dtype as the original array
- assert_equal(b.dtype, a.dtype)
- assert_equal(c.dtype, a.dtype)
-
-
-def test_tril_triu_ndim3():
- for dtype in np.typecodes['AllFloat'] + np.typecodes['AllInteger']:
- a = np.array([
- [[1, 1], [1, 1]],
- [[1, 1], [1, 0]],
- [[1, 1], [0, 0]],
- ], dtype=dtype)
- a_tril_desired = np.array([
- [[1, 0], [1, 1]],
- [[1, 0], [1, 0]],
- [[1, 0], [0, 0]],
- ], dtype=dtype)
- a_triu_desired = np.array([
- [[1, 1], [0, 1]],
- [[1, 1], [0, 0]],
- [[1, 1], [0, 0]],
- ], dtype=dtype)
- a_triu_observed = np.triu(a)
- a_tril_observed = np.tril(a)
- assert_array_equal(a_triu_observed, a_triu_desired)
- assert_array_equal(a_tril_observed, a_tril_desired)
- assert_equal(a_triu_observed.dtype, a.dtype)
- assert_equal(a_tril_observed.dtype, a.dtype)
-
-
-def test_tril_triu_with_inf():
- # Issue 4859
- arr = np.array([[1, 1, np.inf],
- [1, 1, 1],
- [np.inf, 1, 1]])
- out_tril = np.array([[1, 0, 0],
- [1, 1, 0],
- [np.inf, 1, 1]])
- out_triu = out_tril.T
- assert_array_equal(np.triu(arr), out_triu)
- assert_array_equal(np.tril(arr), out_tril)
-
-
-def test_tril_triu_dtype():
- # Issue 4916
- # tril and triu should return the same dtype as input
- for c in np.typecodes['All']:
- if c == 'V':
- continue
- arr = np.zeros((3, 3), dtype=c)
- assert_equal(np.triu(arr).dtype, arr.dtype)
- assert_equal(np.tril(arr).dtype, arr.dtype)
-
- # check special cases
- arr = np.array([['2001-01-01T12:00', '2002-02-03T13:56'],
- ['2004-01-01T12:00', '2003-01-03T13:45']],
- dtype='datetime64')
- assert_equal(np.triu(arr).dtype, arr.dtype)
- assert_equal(np.tril(arr).dtype, arr.dtype)
-
- arr = np.zeros((3, 3), dtype='f4,f4')
- assert_equal(np.triu(arr).dtype, arr.dtype)
- assert_equal(np.tril(arr).dtype, arr.dtype)
-
-
-def test_mask_indices():
- # simple test without offset
- iu = mask_indices(3, np.triu)
- a = np.arange(9).reshape(3, 3)
- assert_array_equal(a[iu], array([0, 1, 2, 4, 5, 8]))
- # Now with an offset
- iu1 = mask_indices(3, np.triu, 1)
- assert_array_equal(a[iu1], array([1, 2, 5]))
-
-
-def test_tril_indices():
- # indices without and with offset
- il1 = tril_indices(4)
- il2 = tril_indices(4, k=2)
- il3 = tril_indices(4, m=5)
- il4 = tril_indices(4, k=2, m=5)
-
- a = np.array([[1, 2, 3, 4],
- [5, 6, 7, 8],
- [9, 10, 11, 12],
- [13, 14, 15, 16]])
- b = np.arange(1, 21).reshape(4, 5)
-
- # indexing:
- assert_array_equal(a[il1],
- array([1, 5, 6, 9, 10, 11, 13, 14, 15, 16]))
- assert_array_equal(b[il3],
- array([1, 6, 7, 11, 12, 13, 16, 17, 18, 19]))
-
- # And for assigning values:
- a[il1] = -1
- assert_array_equal(a,
- array([[-1, 2, 3, 4],
- [-1, -1, 7, 8],
- [-1, -1, -1, 12],
- [-1, -1, -1, -1]]))
- b[il3] = -1
- assert_array_equal(b,
- array([[-1, 2, 3, 4, 5],
- [-1, -1, 8, 9, 10],
- [-1, -1, -1, 14, 15],
- [-1, -1, -1, -1, 20]]))
- # These cover almost the whole array (two diagonals right of the main one):
- a[il2] = -10
- assert_array_equal(a,
- array([[-10, -10, -10, 4],
- [-10, -10, -10, -10],
- [-10, -10, -10, -10],
- [-10, -10, -10, -10]]))
- b[il4] = -10
- assert_array_equal(b,
- array([[-10, -10, -10, 4, 5],
- [-10, -10, -10, -10, 10],
- [-10, -10, -10, -10, -10],
- [-10, -10, -10, -10, -10]]))
-
-
-class TestTriuIndices:
- def test_triu_indices(self):
- iu1 = triu_indices(4)
- iu2 = triu_indices(4, k=2)
- iu3 = triu_indices(4, m=5)
- iu4 = triu_indices(4, k=2, m=5)
-
- a = np.array([[1, 2, 3, 4],
- [5, 6, 7, 8],
- [9, 10, 11, 12],
- [13, 14, 15, 16]])
- b = np.arange(1, 21).reshape(4, 5)
-
- # Both for indexing:
- assert_array_equal(a[iu1],
- array([1, 2, 3, 4, 6, 7, 8, 11, 12, 16]))
- assert_array_equal(b[iu3],
- array([1, 2, 3, 4, 5, 7, 8, 9,
- 10, 13, 14, 15, 19, 20]))
-
- # And for assigning values:
- a[iu1] = -1
- assert_array_equal(a,
- array([[-1, -1, -1, -1],
- [5, -1, -1, -1],
- [9, 10, -1, -1],
- [13, 14, 15, -1]]))
- b[iu3] = -1
- assert_array_equal(b,
- array([[-1, -1, -1, -1, -1],
- [6, -1, -1, -1, -1],
- [11, 12, -1, -1, -1],
- [16, 17, 18, -1, -1]]))
-
- # These cover almost the whole array (two diagonals right of the
- # main one):
- a[iu2] = -10
- assert_array_equal(a,
- array([[-1, -1, -10, -10],
- [5, -1, -1, -10],
- [9, 10, -1, -1],
- [13, 14, 15, -1]]))
- b[iu4] = -10
- assert_array_equal(b,
- array([[-1, -1, -10, -10, -10],
- [6, -1, -1, -10, -10],
- [11, 12, -1, -1, -10],
- [16, 17, 18, -1, -1]]))
-
-
-class TestTrilIndicesFrom:
- def test_exceptions(self):
- assert_raises(ValueError, tril_indices_from, np.ones((2,)))
- assert_raises(ValueError, tril_indices_from, np.ones((2, 2, 2)))
- # assert_raises(ValueError, tril_indices_from, np.ones((2, 3)))
-
-
-class TestTriuIndicesFrom:
- def test_exceptions(self):
- assert_raises(ValueError, triu_indices_from, np.ones((2,)))
- assert_raises(ValueError, triu_indices_from, np.ones((2, 2, 2)))
- # assert_raises(ValueError, triu_indices_from, np.ones((2, 3)))
-
-
-class TestVander:
- def test_basic(self):
- c = np.array([0, 1, -2, 3])
- v = vander(c)
- powers = np.array([[0, 0, 0, 0, 1],
- [1, 1, 1, 1, 1],
- [16, -8, 4, -2, 1],
- [81, 27, 9, 3, 1]])
- # Check default value of N:
- assert_array_equal(v, powers[:, 1:])
- # Check a range of N values, including 0 and 5 (greater than default)
- m = powers.shape[1]
- for n in range(6):
- v = vander(c, N=n)
- assert_array_equal(v, powers[:, m-n:m])
-
- def test_dtypes(self):
- c = array([11, -12, 13], dtype=np.int8)
- v = vander(c)
- expected = np.array([[121, 11, 1],
- [144, -12, 1],
- [169, 13, 1]])
- assert_array_equal(v, expected)
-
- c = array([1.0+1j, 1.0-1j])
- v = vander(c, N=3)
- expected = np.array([[2j, 1+1j, 1],
- [-2j, 1-1j, 1]])
- # The data is floating point, but the values are small integers,
- # so assert_array_equal *should* be safe here (rather than, say,
- # assert_array_almost_equal).
- assert_array_equal(v, expected)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py
deleted file mode 100644
index 74e6a6a28ccb01b8ca0d52944bd385bfae706582..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py
+++ /dev/null
@@ -1,533 +0,0 @@
-from __future__ import annotations
-
-import re
-from typing import TYPE_CHECKING
-
-import numpy as np
-
-from pandas.util._decorators import Appender
-
-from pandas.core.dtypes.common import is_list_like
-from pandas.core.dtypes.concat import concat_compat
-from pandas.core.dtypes.missing import notna
-
-import pandas.core.algorithms as algos
-from pandas.core.arrays import Categorical
-import pandas.core.common as com
-from pandas.core.indexes.api import (
- Index,
- MultiIndex,
-)
-from pandas.core.reshape.concat import concat
-from pandas.core.reshape.util import tile_compat
-from pandas.core.shared_docs import _shared_docs
-from pandas.core.tools.numeric import to_numeric
-
-if TYPE_CHECKING:
- from collections.abc import Hashable
-
- from pandas._typing import AnyArrayLike
-
- from pandas import DataFrame
-
-
-@Appender(_shared_docs["melt"] % {"caller": "pd.melt(df, ", "other": "DataFrame.melt"})
-def melt(
- frame: DataFrame,
- id_vars=None,
- value_vars=None,
- var_name=None,
- value_name: Hashable = "value",
- col_level=None,
- ignore_index: bool = True,
-) -> DataFrame:
- # If multiindex, gather names of columns on all level for checking presence
- # of `id_vars` and `value_vars`
- if isinstance(frame.columns, MultiIndex):
- cols = [x for c in frame.columns for x in c]
- else:
- cols = list(frame.columns)
-
- if value_name in frame.columns:
- raise ValueError(
- f"value_name ({value_name}) cannot match an element in "
- "the DataFrame columns."
- )
-
- if id_vars is not None:
- if not is_list_like(id_vars):
- id_vars = [id_vars]
- elif isinstance(frame.columns, MultiIndex) and not isinstance(id_vars, list):
- raise ValueError(
- "id_vars must be a list of tuples when columns are a MultiIndex"
- )
- else:
- # Check that `id_vars` are in frame
- id_vars = list(id_vars)
- missing = Index(com.flatten(id_vars)).difference(cols)
- if not missing.empty:
- raise KeyError(
- "The following 'id_vars' are not present "
- f"in the DataFrame: {list(missing)}"
- )
- else:
- id_vars = []
-
- if value_vars is not None:
- if not is_list_like(value_vars):
- value_vars = [value_vars]
- elif isinstance(frame.columns, MultiIndex) and not isinstance(value_vars, list):
- raise ValueError(
- "value_vars must be a list of tuples when columns are a MultiIndex"
- )
- else:
- value_vars = list(value_vars)
- # Check that `value_vars` are in frame
- missing = Index(com.flatten(value_vars)).difference(cols)
- if not missing.empty:
- raise KeyError(
- "The following 'value_vars' are not present in "
- f"the DataFrame: {list(missing)}"
- )
- if col_level is not None:
- idx = frame.columns.get_level_values(col_level).get_indexer(
- id_vars + value_vars
- )
- else:
- idx = algos.unique(frame.columns.get_indexer_for(id_vars + value_vars))
- frame = frame.iloc[:, idx]
- else:
- frame = frame.copy()
-
- if col_level is not None: # allow list or other?
- # frame is a copy
- frame.columns = frame.columns.get_level_values(col_level)
-
- if var_name is None:
- if isinstance(frame.columns, MultiIndex):
- if len(frame.columns.names) == len(set(frame.columns.names)):
- var_name = frame.columns.names
- else:
- var_name = [f"variable_{i}" for i in range(len(frame.columns.names))]
- else:
- var_name = [
- frame.columns.name if frame.columns.name is not None else "variable"
- ]
- if isinstance(var_name, str):
- var_name = [var_name]
-
- N, K = frame.shape
- K -= len(id_vars)
-
- mdata: dict[Hashable, AnyArrayLike] = {}
- for col in id_vars:
- id_data = frame.pop(col)
- if not isinstance(id_data.dtype, np.dtype):
- # i.e. ExtensionDtype
- if K > 0:
- mdata[col] = concat([id_data] * K, ignore_index=True)
- else:
- # We can't concat empty list. (GH 46044)
- mdata[col] = type(id_data)([], name=id_data.name, dtype=id_data.dtype)
- else:
- mdata[col] = np.tile(id_data._values, K)
-
- mcolumns = id_vars + var_name + [value_name]
-
- if frame.shape[1] > 0:
- mdata[value_name] = concat(
- [frame.iloc[:, i] for i in range(frame.shape[1])]
- ).values
- else:
- mdata[value_name] = frame._values.ravel("F")
- for i, col in enumerate(var_name):
- mdata[col] = frame.columns._get_level_values(i).repeat(N)
-
- result = frame._constructor(mdata, columns=mcolumns)
-
- if not ignore_index:
- result.index = tile_compat(frame.index, K)
-
- return result
-
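-# Illustrative usage sketch (not part of the original module): the frame and
-# column names below are made up purely to show the wide-to-long reshape that
-# melt() performs.
-#
-#   df = DataFrame({"id": [1, 2], "a": [10, 20], "b": [30, 40]})
-#   melt(df, id_vars="id", value_vars=["a", "b"])
-#      id variable  value
-#   0   1        a     10
-#   1   2        a     20
-#   2   1        b     30
-#   3   2        b     40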
-
-def lreshape(data: DataFrame, groups, dropna: bool = True) -> DataFrame:
- """
- Reshape wide-format data to long. Generalized inverse of DataFrame.pivot.
-
- Accepts a dictionary, ``groups``, in which each key is a new column name
- and each value is a list of old column names that will be "melted" under
- the new column name as part of the reshape.
-
- Parameters
- ----------
- data : DataFrame
- The wide-format DataFrame.
- groups : dict
- {new_name : list_of_columns}.
- dropna : bool, default True
- Do not include columns whose entries are all NaN.
-
- Returns
- -------
- DataFrame
- Reshaped DataFrame.
-
- See Also
- --------
- melt : Unpivot a DataFrame from wide to long format, optionally leaving
- identifiers set.
- pivot : Create a spreadsheet-style pivot table as a DataFrame.
- DataFrame.pivot : Pivot without aggregation that can handle
- non-numeric data.
- DataFrame.pivot_table : Generalization of pivot that can handle
- duplicate values for one index/column pair.
- DataFrame.unstack : Pivot based on the index values instead of a
- column.
- wide_to_long : Wide panel to long format. Less flexible but more
- user-friendly than melt.
-
- Examples
- --------
- >>> data = pd.DataFrame({'hr1': [514, 573], 'hr2': [545, 526],
- ... 'team': ['Red Sox', 'Yankees'],
- ... 'year1': [2007, 2007], 'year2': [2008, 2008]})
- >>> data
- hr1 hr2 team year1 year2
- 0 514 545 Red Sox 2007 2008
- 1 573 526 Yankees 2007 2008
-
- >>> pd.lreshape(data, {'year': ['year1', 'year2'], 'hr': ['hr1', 'hr2']})
- team year hr
- 0 Red Sox 2007 514
- 1 Yankees 2007 573
- 2 Red Sox 2008 545
- 3 Yankees 2008 526
- """
- if isinstance(groups, dict):
- keys = list(groups.keys())
- values = list(groups.values())
- else:
- keys, values = zip(*groups)
-
- all_cols = list(set.union(*(set(x) for x in values)))
- id_cols = list(data.columns.difference(all_cols))
-
- K = len(values[0])
-
- for seq in values:
- if len(seq) != K:
- raise ValueError("All column lists must be same length")
-
- mdata = {}
- pivot_cols = []
-
- for target, names in zip(keys, values):
- to_concat = [data[col]._values for col in names]
-
- mdata[target] = concat_compat(to_concat)
- pivot_cols.append(target)
-
- for col in id_cols:
- mdata[col] = np.tile(data[col]._values, K)
-
- if dropna:
- mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool)
- for c in pivot_cols:
- mask &= notna(mdata[c])
- if not mask.all():
- mdata = {k: v[mask] for k, v in mdata.items()}
-
- return data._constructor(mdata, columns=id_cols + pivot_cols)
-
-
-def wide_to_long(
- df: DataFrame, stubnames, i, j, sep: str = "", suffix: str = r"\d+"
-) -> DataFrame:
- r"""
- Unpivot a DataFrame from wide to long format.
-
- Less flexible but more user-friendly than melt.
-
-    With stubnames ['A', 'B'], this function expects to find one or more
-    groups of columns with format
-    A-suffix1, A-suffix2,..., B-suffix1, B-suffix2,...
-    You specify what you want to call this suffix in the resulting long format
-    with `j` (for example `j='year'`).
-
-    Each row of these wide variables is assumed to be uniquely identified by
-    `i` (can be a single column name or a list of column names).
-
- All remaining variables in the data frame are left intact.
-
- Parameters
- ----------
- df : DataFrame
- The wide-format DataFrame.
- stubnames : str or list-like
- The stub name(s). The wide format variables are assumed to
- start with the stub names.
- i : str or list-like
- Column(s) to use as id variable(s).
- j : str
- The name of the sub-observation variable. What you wish to name your
- suffix in the long format.
- sep : str, default ""
- A character indicating the separation of the variable names
- in the wide format, to be stripped from the names in the long format.
- For example, if your column names are A-suffix1, A-suffix2, you
- can strip the hyphen by specifying `sep='-'`.
- suffix : str, default '\\d+'
- A regular expression capturing the wanted suffixes. '\\d+' captures
- numeric suffixes. Suffixes with no numbers could be specified with the
- negated character class '\\D+'. You can also further disambiguate
- suffixes, for example, if your wide variables are of the form A-one,
- B-two,.., and you have an unrelated column A-rating, you can ignore the
- last one by specifying `suffix='(!?one|two)'`. When all suffixes are
- numeric, they are cast to int64/float64.
-
- Returns
- -------
- DataFrame
- A DataFrame that contains each stub name as a variable, with new index
- (i, j).
-
- See Also
- --------
- melt : Unpivot a DataFrame from wide to long format, optionally leaving
- identifiers set.
- pivot : Create a spreadsheet-style pivot table as a DataFrame.
- DataFrame.pivot : Pivot without aggregation that can handle
- non-numeric data.
- DataFrame.pivot_table : Generalization of pivot that can handle
- duplicate values for one index/column pair.
- DataFrame.unstack : Pivot based on the index values instead of a
- column.
-
- Notes
- -----
- All extra variables are left untouched. This simply uses
- `pandas.melt` under the hood, but is hard-coded to "do the right thing"
- in a typical case.
-
- Examples
- --------
- >>> np.random.seed(123)
- >>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
- ... "A1980" : {0 : "d", 1 : "e", 2 : "f"},
- ... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
- ... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
- ... "X" : dict(zip(range(3), np.random.randn(3)))
- ... })
- >>> df["id"] = df.index
- >>> df
- A1970 A1980 B1970 B1980 X id
- 0 a d 2.5 3.2 -1.085631 0
- 1 b e 1.2 1.3 0.997345 1
- 2 c f 0.7 0.1 0.282978 2
- >>> pd.wide_to_long(df, ["A", "B"], i="id", j="year")
- ... # doctest: +NORMALIZE_WHITESPACE
- X A B
- id year
- 0 1970 -1.085631 a 2.5
- 1 1970 0.997345 b 1.2
- 2 1970 0.282978 c 0.7
- 0 1980 -1.085631 d 3.2
- 1 1980 0.997345 e 1.3
- 2 1980 0.282978 f 0.1
-
- With multiple id columns
-
- >>> df = pd.DataFrame({
- ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
- ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
- ... 'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
- ... 'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
- ... })
- >>> df
- famid birth ht1 ht2
- 0 1 1 2.8 3.4
- 1 1 2 2.9 3.8
- 2 1 3 2.2 2.9
- 3 2 1 2.0 3.2
- 4 2 2 1.8 2.8
- 5 2 3 1.9 2.4
- 6 3 1 2.2 3.3
- 7 3 2 2.3 3.4
- 8 3 3 2.1 2.9
- >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age')
- >>> l
- ... # doctest: +NORMALIZE_WHITESPACE
- ht
- famid birth age
- 1 1 1 2.8
- 2 3.4
- 2 1 2.9
- 2 3.8
- 3 1 2.2
- 2 2.9
- 2 1 1 2.0
- 2 3.2
- 2 1 1.8
- 2 2.8
- 3 1 1.9
- 2 2.4
- 3 1 1 2.2
- 2 3.3
- 2 1 2.3
- 2 3.4
- 3 1 2.1
- 2 2.9
-
- Going from long back to wide just takes some creative use of `unstack`
-
- >>> w = l.unstack()
- >>> w.columns = w.columns.map('{0[0]}{0[1]}'.format)
- >>> w.reset_index()
- famid birth ht1 ht2
- 0 1 1 2.8 3.4
- 1 1 2 2.9 3.8
- 2 1 3 2.2 2.9
- 3 2 1 2.0 3.2
- 4 2 2 1.8 2.8
- 5 2 3 1.9 2.4
- 6 3 1 2.2 3.3
- 7 3 2 2.3 3.4
- 8 3 3 2.1 2.9
-
- Less wieldy column names are also handled
-
- >>> np.random.seed(0)
- >>> df = pd.DataFrame({'A(weekly)-2010': np.random.rand(3),
- ... 'A(weekly)-2011': np.random.rand(3),
- ... 'B(weekly)-2010': np.random.rand(3),
- ... 'B(weekly)-2011': np.random.rand(3),
- ... 'X' : np.random.randint(3, size=3)})
- >>> df['id'] = df.index
- >>> df # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS
- A(weekly)-2010 A(weekly)-2011 B(weekly)-2010 B(weekly)-2011 X id
- 0 0.548814 0.544883 0.437587 0.383442 0 0
- 1 0.715189 0.423655 0.891773 0.791725 1 1
- 2 0.602763 0.645894 0.963663 0.528895 1 2
-
- >>> pd.wide_to_long(df, ['A(weekly)', 'B(weekly)'], i='id',
- ... j='year', sep='-')
- ... # doctest: +NORMALIZE_WHITESPACE
- X A(weekly) B(weekly)
- id year
- 0 2010 0 0.548814 0.437587
- 1 2010 1 0.715189 0.891773
- 2 2010 1 0.602763 0.963663
- 0 2011 0 0.544883 0.383442
- 1 2011 1 0.423655 0.791725
- 2 2011 1 0.645894 0.528895
-
- If we have many columns, we could also use a regex to find our
- stubnames and pass that list on to wide_to_long
-
- >>> stubnames = sorted(
- ... set([match[0] for match in df.columns.str.findall(
- ... r'[A-B]\(.*\)').values if match != []])
- ... )
- >>> list(stubnames)
- ['A(weekly)', 'B(weekly)']
-
- All of the above examples have integers as suffixes. It is possible to
- have non-integers as suffixes.
-
- >>> df = pd.DataFrame({
- ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
- ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
- ... 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
- ... 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
- ... })
- >>> df
- famid birth ht_one ht_two
- 0 1 1 2.8 3.4
- 1 1 2 2.9 3.8
- 2 1 3 2.2 2.9
- 3 2 1 2.0 3.2
- 4 2 2 1.8 2.8
- 5 2 3 1.9 2.4
- 6 3 1 2.2 3.3
- 7 3 2 2.3 3.4
- 8 3 3 2.1 2.9
-
- >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age',
- ... sep='_', suffix=r'\w+')
- >>> l
- ... # doctest: +NORMALIZE_WHITESPACE
- ht
- famid birth age
- 1 1 one 2.8
- two 3.4
- 2 one 2.9
- two 3.8
- 3 one 2.2
- two 2.9
- 2 1 one 2.0
- two 3.2
- 2 one 1.8
- two 2.8
- 3 one 1.9
- two 2.4
- 3 1 one 2.2
- two 3.3
- 2 one 2.3
- two 3.4
- 3 one 2.1
- two 2.9
- """
-
- def get_var_names(df, stub: str, sep: str, suffix: str) -> list[str]:
- regex = rf"^{re.escape(stub)}{re.escape(sep)}{suffix}$"
- pattern = re.compile(regex)
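-        # e.g. stub="A", sep="-", suffix=r"\d+" compiles to r"^A\-\d+$" and
-        # matches wide columns such as "A-1970" or "A-1980".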
- return [col for col in df.columns if pattern.match(col)]
-
- def melt_stub(df, stub: str, i, j, value_vars, sep: str):
- newdf = melt(
- df,
- id_vars=i,
- value_vars=value_vars,
- value_name=stub.rstrip(sep),
- var_name=j,
- )
- newdf[j] = Categorical(newdf[j])
- newdf[j] = newdf[j].str.replace(re.escape(stub + sep), "", regex=True)
-
- # GH17627 Cast numerics suffixes to int/float
- newdf[j] = to_numeric(newdf[j], errors="ignore")
-
- return newdf.set_index(i + [j])
-
- if not is_list_like(stubnames):
- stubnames = [stubnames]
- else:
- stubnames = list(stubnames)
-
- if any(col in stubnames for col in df.columns):
- raise ValueError("stubname can't be identical to a column name")
-
- if not is_list_like(i):
- i = [i]
- else:
- i = list(i)
-
- if df[i].duplicated().any():
- raise ValueError("the id variables need to uniquely identify each row")
-
- value_vars = [get_var_names(df, stub, sep, suffix) for stub in stubnames]
-
- value_vars_flattened = [e for sublist in value_vars for e in sublist]
- id_vars = list(set(df.columns.tolist()).difference(value_vars_flattened))
-
- _melted = [melt_stub(df, s, i, j, v, sep) for s, v in zip(stubnames, value_vars)]
- melted = _melted[0].join(_melted[1:], how="outer")
-
- if len(i) == 1:
- new = df[id_vars].set_index(i).join(melted)
- return new
-
- new = df[id_vars].merge(melted.reset_index(), on=i).set_index(i + [j])
-
- return new
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py
deleted file mode 100644
index 75d47e3daa10339f4c4cc7b35c52f24bbb20277a..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from pandas import Series
-import pandas._testing as tm
-
-
-class TestCombine:
- def test_combine_scalar(self):
- # GH#21248
- # Note - combine() with another Series is tested elsewhere because
- # it is used when testing operators
- ser = Series([i * 10 for i in range(5)])
- result = ser.combine(3, lambda x, y: x + y)
- expected = Series([i * 10 + 3 for i in range(5)])
- tm.assert_series_equal(result, expected)
-
- result = ser.combine(22, lambda x, y: min(x, y))
- expected = Series([min(i * 10, 22) for i in range(5)])
- tm.assert_series_equal(result, expected)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py
deleted file mode 100644
index f935688f1ca66303ba186ffc123afeaa69489b42..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py
+++ /dev/null
@@ -1,239 +0,0 @@
-"""
- pygments.sphinxext
- ~~~~~~~~~~~~~~~~~~
-
- Sphinx extension to generate automatic documentation of lexers,
- formatters and filters.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import sys
-
-from docutils import nodes
-from docutils.statemachine import ViewList
-from docutils.parsers.rst import Directive
-from sphinx.util.nodes import nested_parse_with_titles
-
-
-MODULEDOC = '''
-.. module:: %s
-
-%s
-%s
-'''
-
-LEXERDOC = '''
-.. class:: %s
-
- :Short names: %s
- :Filenames: %s
- :MIME types: %s
-
- %s
-
-'''
-
-FMTERDOC = '''
-.. class:: %s
-
- :Short names: %s
- :Filenames: %s
-
- %s
-
-'''
-
-FILTERDOC = '''
-.. class:: %s
-
- :Name: %s
-
- %s
-
-'''
-
-
-class PygmentsDoc(Directive):
- """
- A directive to collect all lexers/formatters/filters and generate
- autoclass directives for them.
- """
- has_content = False
- required_arguments = 1
- optional_arguments = 0
- final_argument_whitespace = False
- option_spec = {}
-
- def run(self):
- self.filenames = set()
- if self.arguments[0] == 'lexers':
- out = self.document_lexers()
- elif self.arguments[0] == 'formatters':
- out = self.document_formatters()
- elif self.arguments[0] == 'filters':
- out = self.document_filters()
- elif self.arguments[0] == 'lexers_overview':
- out = self.document_lexers_overview()
- else:
- raise Exception('invalid argument for "pygmentsdoc" directive')
- node = nodes.compound()
- vl = ViewList(out.split('\n'), source='')
- nested_parse_with_titles(self.state, vl, node)
- for fn in self.filenames:
- self.state.document.settings.record_dependencies.add(fn)
- return node.children
-
- def document_lexers_overview(self):
- """Generate a tabular overview of all lexers.
-
- The columns are the lexer name, the extensions handled by this lexer
- (or "None"), the aliases and a link to the lexer class."""
- from pygments.lexers._mapping import LEXERS
- import pygments.lexers
- out = []
-
- table = []
-
- def format_link(name, url):
- if url:
- return f'`{name} <{url}>`_'
- return name
-
- for classname, data in sorted(LEXERS.items(), key=lambda x: x[1][1].lower()):
- lexer_cls = pygments.lexers.find_lexer_class(data[1])
- extensions = lexer_cls.filenames + lexer_cls.alias_filenames
-
- table.append({
- 'name': format_link(data[1], lexer_cls.url),
-                'extensions': ', '.join(extensions).replace('*', '\\*').replace('_', '\\_') or 'None',
- 'aliases': ', '.join(data[2]),
- 'class': f'{data[0]}.{classname}'
- })
-
- column_names = ['name', 'extensions', 'aliases', 'class']
- column_lengths = [max([len(row[column]) for row in table if row[column]])
- for column in column_names]
-
- def write_row(*columns):
- """Format a table row"""
- out = []
- for l, c in zip(column_lengths, columns):
- if c:
- out.append(c.ljust(l))
- else:
- out.append(' '*l)
-
- return ' '.join(out)
-
- def write_seperator():
- """Write a table separator row"""
- sep = ['='*c for c in column_lengths]
- return write_row(*sep)
-
- out.append(write_seperator())
- out.append(write_row('Name', 'Extension(s)', 'Short name(s)', 'Lexer class'))
- out.append(write_seperator())
- for row in table:
- out.append(write_row(
- row['name'],
- row['extensions'],
- row['aliases'],
- f':class:`~{row["class"]}`'))
- out.append(write_seperator())
-
- return '\n'.join(out)
-
- def document_lexers(self):
- from pygments.lexers._mapping import LEXERS
- import pygments
- import inspect
- import pathlib
-
- out = []
- modules = {}
- moduledocstrings = {}
- for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]):
- module = data[0]
- mod = __import__(module, None, None, [classname])
- self.filenames.add(mod.__file__)
- cls = getattr(mod, classname)
- if not cls.__doc__:
- print("Warning: %s does not have a docstring." % classname)
- docstring = cls.__doc__
- if isinstance(docstring, bytes):
- docstring = docstring.decode('utf8')
-
- example_file = getattr(cls, '_example', None)
- if example_file:
- p = pathlib.Path(inspect.getabsfile(pygments)).parent.parent /\
- 'tests' / 'examplefiles' / example_file
- content = p.read_text(encoding='utf-8')
- if not content:
- raise Exception(
- f"Empty example file '{example_file}' for lexer "
- f"{classname}")
-
- if data[2]:
- lexer_name = data[2][0]
- docstring += '\n\n .. admonition:: Example\n'
- docstring += f'\n .. code-block:: {lexer_name}\n\n'
- for line in content.splitlines():
- docstring += f' {line}\n'
-
- modules.setdefault(module, []).append((
- classname,
- ', '.join(data[2]) or 'None',
-                ', '.join(data[3]).replace('*', '\\*').replace('_', '\\_') or 'None',
- ', '.join(data[4]) or 'None',
- docstring))
- if module not in moduledocstrings:
- moddoc = mod.__doc__
- if isinstance(moddoc, bytes):
- moddoc = moddoc.decode('utf8')
- moduledocstrings[module] = moddoc
-
- for module, lexers in sorted(modules.items(), key=lambda x: x[0]):
- if moduledocstrings[module] is None:
- raise Exception("Missing docstring for %s" % (module,))
- heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.')
- out.append(MODULEDOC % (module, heading, '-'*len(heading)))
- for data in lexers:
- out.append(LEXERDOC % data)
-
- return ''.join(out)
-
- def document_formatters(self):
- from pygments.formatters import FORMATTERS
-
- out = []
- for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]):
- module = data[0]
- mod = __import__(module, None, None, [classname])
- self.filenames.add(mod.__file__)
- cls = getattr(mod, classname)
- docstring = cls.__doc__
- if isinstance(docstring, bytes):
- docstring = docstring.decode('utf8')
- heading = cls.__name__
- out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None',
- ', '.join(data[3]).replace('*', '\\*') or 'None',
- docstring))
- return ''.join(out)
-
- def document_filters(self):
- from pygments.filters import FILTERS
-
- out = []
- for name, cls in FILTERS.items():
- self.filenames.add(sys.modules[cls.__module__].__file__)
- docstring = cls.__doc__
- if isinstance(docstring, bytes):
- docstring = docstring.decode('utf8')
- out.append(FILTERDOC % (cls.__name__, name, docstring))
- return ''.join(out)
-
-
-def setup(app):
- app.add_directive('pygmentsdoc', PygmentsDoc)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py
deleted file mode 100644
index 795232cf642a3c7ca107fbeb47bb74f854ac82a4..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import os
-import typing
-from collections.abc import MutableMapping
-from pathlib import Path
-
-
-class undefined:
- pass
-
-
-class EnvironError(Exception):
- pass
-
-
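-# Mapping over os.environ that records every key that has been read and then
-# refuses to set or delete that key, so configuration cannot silently change
-# after it has been consumed.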
-class Environ(MutableMapping):
- def __init__(self, environ: typing.MutableMapping = os.environ):
- self._environ = environ
- self._has_been_read: typing.Set[typing.Any] = set()
-
- def __getitem__(self, key: typing.Any) -> typing.Any:
- self._has_been_read.add(key)
- return self._environ.__getitem__(key)
-
- def __setitem__(self, key: typing.Any, value: typing.Any) -> None:
- if key in self._has_been_read:
- raise EnvironError(
- f"Attempting to set environ['{key}'], but the value has already been "
- "read."
- )
- self._environ.__setitem__(key, value)
-
- def __delitem__(self, key: typing.Any) -> None:
- if key in self._has_been_read:
- raise EnvironError(
- f"Attempting to delete environ['{key}'], but the value has already "
- "been read."
- )
- self._environ.__delitem__(key)
-
- def __iter__(self) -> typing.Iterator:
- return iter(self._environ)
-
- def __len__(self) -> int:
- return len(self._environ)
-
-
-environ = Environ()
-
-T = typing.TypeVar("T")
-
-
-class Config:
- def __init__(
- self,
- env_file: typing.Optional[typing.Union[str, Path]] = None,
- environ: typing.Mapping[str, str] = environ,
- env_prefix: str = "",
- ) -> None:
- self.environ = environ
- self.env_prefix = env_prefix
- self.file_values: typing.Dict[str, str] = {}
- if env_file is not None and os.path.isfile(env_file):
- self.file_values = self._read_file(env_file)
-
- @typing.overload
- def __call__(self, key: str, *, default: None) -> typing.Optional[str]:
- ...
-
- @typing.overload
- def __call__(self, key: str, cast: typing.Type[T], default: T = ...) -> T:
- ...
-
- @typing.overload
- def __call__(
- self, key: str, cast: typing.Type[str] = ..., default: str = ...
- ) -> str:
- ...
-
- @typing.overload
- def __call__(
- self,
- key: str,
- cast: typing.Callable[[typing.Any], T] = ...,
- default: typing.Any = ...,
- ) -> T:
- ...
-
- @typing.overload
- def __call__(
- self, key: str, cast: typing.Type[str] = ..., default: T = ...
- ) -> typing.Union[T, str]:
- ...
-
- def __call__(
- self,
- key: str,
- cast: typing.Optional[typing.Callable] = None,
- default: typing.Any = undefined,
- ) -> typing.Any:
- return self.get(key, cast, default)
-
- def get(
- self,
- key: str,
- cast: typing.Optional[typing.Callable] = None,
- default: typing.Any = undefined,
- ) -> typing.Any:
- key = self.env_prefix + key
- if key in self.environ:
- value = self.environ[key]
- return self._perform_cast(key, value, cast)
- if key in self.file_values:
- value = self.file_values[key]
- return self._perform_cast(key, value, cast)
- if default is not undefined:
- return self._perform_cast(key, default, cast)
- raise KeyError(f"Config '{key}' is missing, and has no default.")
-
- def _read_file(self, file_name: typing.Union[str, Path]) -> typing.Dict[str, str]:
- file_values: typing.Dict[str, str] = {}
- with open(file_name) as input_file:
- for line in input_file.readlines():
- line = line.strip()
- if "=" in line and not line.startswith("#"):
- key, value = line.split("=", 1)
- key = key.strip()
- value = value.strip().strip("\"'")
- file_values[key] = value
- return file_values
-
- def _perform_cast(
- self, key: str, value: typing.Any, cast: typing.Optional[typing.Callable] = None
- ) -> typing.Any:
- if cast is None or value is None:
- return value
- elif cast is bool and isinstance(value, str):
- mapping = {"true": True, "1": True, "false": False, "0": False}
- value = value.lower()
- if value not in mapping:
- raise ValueError(
- f"Config '{key}' has value '{value}'. Not a valid bool."
- )
- return mapping[value]
- try:
- return cast(value)
- except (TypeError, ValueError):
- raise ValueError(
- f"Config '{key}' has value '{value}'. Not a valid {cast.__name__}."
- )
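-
-
-# Illustrative usage sketch (not part of the original module); the ".env" file
-# name and the DEBUG / DATABASE_URL keys are hypothetical:
-#
-#   config = Config(".env")
-#   DEBUG = config("DEBUG", cast=bool, default=False)
-#   DATABASE_URL = config("DATABASE_URL", default="sqlite:///db.sqlite3")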
diff --git a/spaces/propilot/seo-powered-by-ia/cases/content_generation.py b/spaces/propilot/seo-powered-by-ia/cases/content_generation.py
deleted file mode 100644
index c3ee06ac031fe3de59e3ee4f6c861a93ca043b0e..0000000000000000000000000000000000000000
--- a/spaces/propilot/seo-powered-by-ia/cases/content_generation.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from .monitoring import HEADERS
-import streamlit as st
-from langchain import LLMChain
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- SystemMessagePromptTemplate,
- HumanMessagePromptTemplate,
-)
-
-
-CREATIVE_TEMPLATE = """Debes actuar como un agente experto en SEO y Marketing Digital, y utilizando tus habilidades y conocimientos deberás proporcionar ideas innovadoras para la creación de contenido basado en las necesidades del usuario.\n
-Analiza la descripción proporcionada y propone varias ideas de temas, ángulos o enfoques que podrían ser interesantes y atractivos para la audiencia, manteniendo siempre un enfoque de optimización SEO.
-"""
-
-GENERATIVE_TEMPLATE = """Debes actuar como un agente experto en SEO y Marketing Digital, y utilizando tus habilidades y conocimientos deberás generar contenido basado en las ideas y necesidades del usuario optimizado para SEO.\n
-1. Generar un esquema, esbozando la estructura del contenido a generar.\n
-2. Escritura del contenido: Desarrolla cada sección de tu esquema en párrafos completos.\n
-3. Inclusión de emojis: Los emojis pueden hacer que tu contenido sea más atractivo y amigable.\n
-4. Optimización SEO: Asegúrate de que tus palabras clave aparecen en los lugares importantes de tu contenido.\n
-5. Análisis: Debes analizar el contenido generado con el fin de identificar palabras claves que puedan optimizarse para SEO, puntos de mejora en la estructura y detalle del contenido.
-"""
-
-def handle_seo_action(template, action, action_text, model, api_key, creativity_level=None):
- if api_key:
- if template:
- with st.spinner(f'{action_text}...'):
- if creativity_level:
- return action(None, model, api_key, creativity_level, template)
- return action(None, model, api_key, template)
- else:
- st.warning('Please enter some template to generate.')
- else:
- st.warning('Please enter your API Key.')
- return None
-
-
-def get_prompt_template(mode, include_emojis, tone, user_request):
- base_template = CREATIVE_TEMPLATE if mode == "Lluvia de Ideas" else GENERATIVE_TEMPLATE
- if include_emojis:
- base_template += "\n\nIncluye emojis en tu contenido para hacerlo más atractivo."
- if tone == 'Formal':
- base_template += "\n\nEl tono del contenido debe ser formal."
-
- if user_request:
- base_template += f"\n\nSolicitud del usuario: {user_request}"
-
- return base_template
-
-def display_content_generation(api_key, model):
- st.title("Generación de Contenido Digital")
-
-    # Add the customizable content options
- st.markdown("Personaliza tu contenido:")
- mode = st.radio("Modo", ['Lluvia de Ideas', 'Generador de Contenido'])
- include_emojis = st.checkbox("Incluir emojis")
- tone = st.selectbox("Tono del contenido", ['Formal', 'Informal', 'Amigable', 'Profesional'])
-
- st.markdown("Selecciona el nivel de creatividad:")
- creativity_level = st.slider("Nivel de Creatividad", min_value=0.0, max_value=1.0, value=0.5, step=0.1)
-
- # Allow the user to modify the template and enter user request
- st.markdown("Modifica las instrucciones del bot si lo deseas:")
- user_request = st.text_input("Solicitud del usuario")
- _ = st.text_area("Previsualización de Solicitud", get_prompt_template(mode, include_emojis, tone, user_request), height=200)
-
- if st.button("Generar"):
- # Combine template and user request
- template_with_request = get_prompt_template(mode, include_emojis, tone, user_request)
-
- # Pass the template to handle_seo_action function
- generated_content = handle_seo_action(template_with_request, content_generation_with_langchain, 'Creando el contenido optimizado para SEO', model, api_key, creativity_level)
-
- if generated_content:
- st.success('Generación de contenido completada.')
- st.markdown("**Contenido generado:**")
- st.markdown(f"> {generated_content}")
-
-
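-# Build a system + human ChatPromptTemplate and run it through an LLMChain.
-# `template` carries the SEO instructions (with the user request already
-# embedded) and `content` fills the "{content}" human-message slot; the UI
-# above passes None for it, so the system template drives the output.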
-def content_generation_with_langchain(content, model, openai_key, creativity_level, template):
- chat = ChatOpenAI(
- model=model,
- temperature=creativity_level,
- openai_api_key=openai_key,
- headers=HEADERS if HEADERS else None,
- )
-
- system_message_prompt = SystemMessagePromptTemplate.from_template(template)
- human_template = "{content}"
- human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
- chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
-
- chain = LLMChain(llm=chat, prompt=chat_prompt)
- optimized_content = chain.run(content=content)
-
- return optimized_content
diff --git a/spaces/pseudolab/Balanced-News-Reading/app.py b/spaces/pseudolab/Balanced-News-Reading/app.py
deleted file mode 100644
index 0eb620589a945fc72d132e3182d52282e27ae31b..0000000000000000000000000000000000000000
--- a/spaces/pseudolab/Balanced-News-Reading/app.py
+++ /dev/null
@@ -1,117 +0,0 @@
-import gradio as gr
-from newspaper import Article
-from newspaper import Config
-
-import requests
-from bs4 import BeautifulSoup as bs
-from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
-
-# Load Model and Tokenize
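-# Summarize Korean news text with the ainize/kobart-news BART checkpoint.
-# Note: the tokenizer and model are re-loaded on every call, which is simple
-# but slow; loading them once at module scope would avoid the repeated cost.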
-def get_summary(input_text):
- tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news")
- summary_model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news")
- input_ids = tokenizer.encode(input_text, return_tensors="pt")
- summary_text_ids = summary_model.generate(
- input_ids=input_ids,
- length_penalty=2,
- top_p=0.9,
- max_length=128,
- min_length=12,
- num_beams=2,
- )
- # "task_specific_params": {
- # "summarization": {
- # "length_penalty": 1.0,
- # "max_length": 128,
- # "min_length": 12,
- # "num_beams": 4
- # }
- return tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)
-
-
-
-USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
-config = Config()
-config.browser_user_agent = USER_AGENT
-config.request_timeout = 10
-
-class news_collector:
- def __init__(self):
- self.examples_text = []
-
-
- def get_new_parser(self, url):
- article = Article(url, language='ko')
- article.download()
- article.parse()
- return article
-
-    def get_news_links(self, page=''):
-        url = "https://news.daum.net/breakingnews/economic"
-        response = requests.get(url)
-
-        soup = bs(response.text, 'html.parser')
-        news_titles = soup.select("a.link_txt")
-        links = [item.attrs['href'] for item in news_titles]
-        https_links = [item for item in links if item.startswith('https')]
-        return https_links
-
-
- def update_news_examples(self):
- news_links = self.get_news_links()
-
- for news_url in news_links:
- article = self.get_new_parser(news_url)
- if article.text:
- self.examples_text.append([get_summary(article.text[:1500]), news_url])
-
-
-title = "균형잡힌 뉴스 읽기 (Balanced News Reading)"
-
-
-
-with gr.Blocks(theme='pseudolab/huggingface-korea-theme') as demo:
-
- collector = news_collector()
- collector.update_news_examples()
-
- with gr.Tab("소개"):
- gr.Markdown(
- """
-        # Balanced News Reading
-
-        Read news while checking whether an article is positive or negative. Recent economic news articles are fetched so you can try them straight away from the Examples.
-
-        ## 1. How to use
-        Economic articles are fetched from Daum News, summarized, and added to `Examples`. Pick the article you want to analyze from `Examples` and press `Submit`; the sentiment result for that article appears under `Classification`.
-        The result is one of three labels, `neutral`, `positive`, or `negative`, shown together with the probability of each state.
-
-        ## 2. How it works
-        News articles are crawled and summarized with a summarization model >> the article summaries are added to the Examples >> a sentiment model fine-tuned for Korean then scores the sentiment of the selected article.
- """)
-
- with gr.Tab("데모"):
- Link_TXT = gr.Textbox(label="뉴스 내용", placeholder = "뉴스 기사 내용을 입력하세요.")
- gr.load("models/gabrielyang/finance_news_classifier-KR_v7",
- # gr.load("models/Hyeonseo/ko-finance_news_classifier",
- inputs = Link_TXT)
- Link_URL = gr.Textbox(label="뉴스 URL")
-
- # diable due to dynamic loading
- # update_button = gr.Button(value="뉴스 데이터 업데이트")
- # update_button.click(fn=collector.update_news_examples_and_update, inputs=None, outputs=None)
-
- gr.Examples(
- collector.examples_text,
- [Link_TXT, Link_URL],
- )
-
-if __name__ == "__main__":
- demo.launch()
\ No newline at end of file
diff --git a/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
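-// Returns true once the viewport is scrolled to within `offset` pixels of the
-// bottom of the document; the scroll listener is passive and the value is
-// re-checked on mount and whenever `offset` changes.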
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/pyodide-demo/self-hosted/attrs.js b/spaces/pyodide-demo/self-hosted/attrs.js
deleted file mode 100644
index 5eae914be43c85d17088d41542d12d136b7540fe..0000000000000000000000000000000000000000
--- a/spaces/pyodide-demo/self-hosted/attrs.js
+++ /dev/null
@@ -1 +0,0 @@
-var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="attrs.data";var REMOTE_PACKAGE_BASE="attrs.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attr",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attrs",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attrs-21.4.0-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:111541,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1249,2285,3610,4658,5847,7059,8387,9649,10741,11485,12460,13322,14509,15929,17397,18839,20147,21322,22729,23905,25165,26451,27850,28966,30052,30994,32111,33298,34556,35664,36511,37500,38555,39887,41325,42689,44074,45566,47020,48445,49662,50675,51790,52902,54012,54931,55915,57098,58192,59391,60378,61548,62580,63149,63742,64725,66038,67336,68421,69581,70428,71648,72920,74276,75404,76727,77930,79041,80331,81435,82588,83806,84956,86163,87443,88644,89860,91015,91936,92958,94162,95442,96898,97440,97985,98544,99352,100517,101523,102642,103543,104621,105724,107261,108760,110298,111274],sizes:[1249,1036,1325,1048,1189,1212,1328,1262,1092,744,975,862,1187,1420,1468,1442,1308,1175,1407,1176,1260,1286,1399,1116,1086,942,1117,1187,1258,1108,847,989,1055,1332,1438,1364,1385,1492,1454,1425,1217,1013,1115,1112,1110,919,984,1183,1094,1199,987,1170,1032,569,593,983,1313,1298,1085,1160,847,1220,1272,1356,1128,1323,1203,1111,1290,1104,1153,1218,1150,1207,1280,1201,1216,1155,921,1022,1204,1280,1456,542,545,559,808,1165,1006,1119,901,1078,1103,1537,1499,1538,976,267],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_attrs.data")}Module["addRunDependency"]("datafile_attrs.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/attr/__init__.py",start:0,end:1667,audio:0},{filename:"/lib/python3.9/site-packages/attr/_cmp.py",start:1667,end:5832,audio:0},{filename:"/lib/python3.9/site-packages/attr/_compat.py",start:5832,end:14228,audio:0},{filename:"/lib/python3.9/site-packages/attr/_config.py",start:14228,end:15120,audio:0},{filename:"/lib/python3.9/site-packages/attr/_funcs.py",start:15120,end:29873,audio:0},{filename:"/lib/python3.9/site-packages/attr/_make.py",start:29873,end:132609,audio:0},{filename:"/lib/python3.9/site-packages/attr/_next_gen.py",start:132609,end:138361,audio:0},{filename:"/lib/python3.9/site-packages/attr/_version_info.py",start:138361,end:140555,audio:0},{filename:"/lib/python3.9/site-packages/attr/converters.py",start:140555,end:144633,audio:0},{filename:"/lib/python3.9/site-packages/attr/exceptions.py",start:144633,end:146614,audio:0},{filename:"/lib/python3.9/site-packages/attr/filters.py",start:146614,end:147738,audio:0},{filename:"/lib/python3.9/site-packages/attr/setters.py",start:147738,end:149204,audio:0},{filename:"/lib/python3.9/site-packages/attr/validators.py",start:149204,end:165170,audio:0},{filename:"/lib/python3.9/site-packages/attr/__init__.pyi",start:165170,end:180270,audio:0},{filename:"/lib/python3.9/site-packages/attr/_cmp.pyi",start:180270,end:180587,audio:0},{filename:"/lib/python3.9/site-packages/attr/_version_info.pyi",start:180587,end:180796,audio:0},{filename:"/lib/python3.9/site-packages/attr/converters.pyi",start:180796,end:181212,audio:0},{filename:"/lib/python3.9/site-packages/attr/exceptions.pyi",start:181212,end:181751,audio:0},{filename:"/lib/python3.9/site-packages/attr/filters.pyi",start:181751,end:181966,audio:0},{filename:"/lib/python3.9/site-packages/attr/py.typed",start:181966,end:181966,audio:0},{filename:"/lib/python3.9/site-packages/attr/setters.pyi",start:181966,end:182539,audio:0},{filename:"/lib/python3.9/site-packages/attr/validators.pyi",start:182539,end:184807,audio:0},{filename:"/lib/python3.9/site-packages/attrs/__init__.py",start:184807,end:185916,audio:0},{filename:"/lib/python3.9/site-packages/attrs/converters.py",start:185916,end:185986,audio:0},{filename:"/lib/python3.9/site-packages/attrs/exceptions.py",start:185986,end:186056,audio:0},{filename:"/lib/python3.9/site-packages/attrs/filters.py",start:186056,end:186123,audio:0},{filename:"/lib/python3.9/site-packages/attrs/setters.py",start:186123,end:186190,audio:0},{filename:"/lib/python3.9/site-packages/attrs/validators.py",start:186190,end:186260,audio:0},{filename:"/lib/python3.9/site-packages/attrs/__init__.pyi",start:186260,end:188242,audio:0},{filename:"/lib/python3.9/site-packages/attrs/py.typed",start:188242,end:188242,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/PKG-INFO",start:188242,end:196284,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/SOURCES.txt",start:196284,end:198562,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/dependency
_links.txt",start:198562,end:198563,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/not-zip-safe",start:198563,end:198564,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/requires.txt",start:198564,end:199195,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/top_level.txt",start:199195,end:199206,audio:0}],remote_package_size:115637,package_uuid:"f3fa16f6-57e0-45c5-8792-b9addacd1d36"})})();
\ No newline at end of file
diff --git a/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py b/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
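-# Transformer-style encoder: n_layers of windowed relative-position
-# self-attention followed by a 1-D convolutional feed-forward block, each with
-# dropout, a residual connection and LayerNorm, all gated by x_mask.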
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
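-      # fast GELU approximation: x * sigmoid(1.702 * x)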
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/r3gm/AICoverGen/src/rmvpe.py b/spaces/r3gm/AICoverGen/src/rmvpe.py
deleted file mode 100644
index 8d0d57297d4301e43a4fdcda216ae39c5e3b83b4..0000000000000000000000000000000000000000
--- a/spaces/r3gm/AICoverGen/src/rmvpe.py
+++ /dev/null
@@ -1,432 +0,0 @@
-import torch, numpy as np
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-
-class BiGRU(nn.Module):
- def __init__(self, input_features, hidden_features, num_layers):
- super(BiGRU, self).__init__()
- self.gru = nn.GRU(
- input_features,
- hidden_features,
- num_layers=num_layers,
- batch_first=True,
- bidirectional=True,
- )
-
- def forward(self, x):
- return self.gru(x)[0]
-
-
-class ConvBlockRes(nn.Module):
- def __init__(self, in_channels, out_channels, momentum=0.01):
- super(ConvBlockRes, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- if in_channels != out_channels:
- self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
- self.is_shortcut = True
- else:
- self.is_shortcut = False
-
- def forward(self, x):
- if self.is_shortcut:
- return self.conv(x) + self.shortcut(x)
- else:
- return self.conv(x) + x
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- in_channels,
- in_size,
- n_encoders,
- kernel_size,
- n_blocks,
- out_channels=16,
- momentum=0.01,
- ):
- super(Encoder, self).__init__()
- self.n_encoders = n_encoders
- self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
- self.layers = nn.ModuleList()
- self.latent_channels = []
- for i in range(self.n_encoders):
- self.layers.append(
- ResEncoderBlock(
- in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
- )
- )
- self.latent_channels.append([out_channels, in_size])
- in_channels = out_channels
- out_channels *= 2
- in_size //= 2
- self.out_size = in_size
- self.out_channel = out_channels
-
- def forward(self, x):
- concat_tensors = []
- x = self.bn(x)
- for i in range(self.n_encoders):
- _, x = self.layers[i](x)
- concat_tensors.append(_)
- return x, concat_tensors
-
-
-class ResEncoderBlock(nn.Module):
- def __init__(
- self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
- ):
- super(ResEncoderBlock, self).__init__()
- self.n_blocks = n_blocks
- self.conv = nn.ModuleList()
- self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
- self.kernel_size = kernel_size
- if self.kernel_size is not None:
- self.pool = nn.AvgPool2d(kernel_size=kernel_size)
-
- def forward(self, x):
- for i in range(self.n_blocks):
- x = self.conv[i](x)
- if self.kernel_size is not None:
- return x, self.pool(x)
- else:
- return x
-
-
-class Intermediate(nn.Module): #
- def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
- super(Intermediate, self).__init__()
- self.n_inters = n_inters
- self.layers = nn.ModuleList()
- self.layers.append(
- ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
- )
- for i in range(self.n_inters - 1):
- self.layers.append(
- ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
- )
-
- def forward(self, x):
- for i in range(self.n_inters):
- x = self.layers[i](x)
- return x
-
-
-class ResDecoderBlock(nn.Module):
- def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
- super(ResDecoderBlock, self).__init__()
- out_padding = (0, 1) if stride == (1, 2) else (1, 1)
- self.n_blocks = n_blocks
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=stride,
- padding=(1, 1),
- output_padding=out_padding,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- self.conv2 = nn.ModuleList()
- self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
-
- def forward(self, x, concat_tensor):
- x = self.conv1(x)
- x = torch.cat((x, concat_tensor), dim=1)
- for i in range(self.n_blocks):
- x = self.conv2[i](x)
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
- super(Decoder, self).__init__()
- self.layers = nn.ModuleList()
- self.n_decoders = n_decoders
- for i in range(self.n_decoders):
- out_channels = in_channels // 2
- self.layers.append(
- ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
- )
- in_channels = out_channels
-
- def forward(self, x, concat_tensors):
- for i in range(self.n_decoders):
- x = self.layers[i](x, concat_tensors[-1 - i])
- return x
-
-
-class DeepUnet(nn.Module):
- def __init__(
- self,
- kernel_size,
- n_blocks,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(DeepUnet, self).__init__()
- self.encoder = Encoder(
- in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
- )
- self.intermediate = Intermediate(
- self.encoder.out_channel // 2,
- self.encoder.out_channel,
- inter_layers,
- n_blocks,
- )
- self.decoder = Decoder(
- self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
- )
-
- def forward(self, x):
- x, concat_tensors = self.encoder(x)
- x = self.intermediate(x)
- x = self.decoder(x, concat_tensors)
- return x
-
-
-class E2E(nn.Module):
- def __init__(
- self,
- n_blocks,
- n_gru,
- kernel_size,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(E2E, self).__init__()
- self.unet = DeepUnet(
- kernel_size,
- n_blocks,
- en_de_layers,
- inter_layers,
- in_channels,
- en_out_channels,
- )
- self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
- if n_gru:
- self.fc = nn.Sequential(
- BiGRU(3 * 128, 256, n_gru),
- nn.Linear(512, 360),
- nn.Dropout(0.25),
- nn.Sigmoid(),
- )
- else:
- self.fc = nn.Sequential(
-                # 3 feature maps x 128 mel bins -> 360 pitch classes, matching the sizes used in the BiGRU branch above
-                nn.Linear(3 * 128, 360), nn.Dropout(0.25), nn.Sigmoid()
- )
-
- def forward(self, mel):
- mel = mel.transpose(-1, -2).unsqueeze(1)
- x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
- x = self.fc(x)
- return x
-
-
-from librosa.filters import mel
-
-
-class MelSpectrogram(torch.nn.Module):
- def __init__(
- self,
- is_half,
- n_mel_channels,
- sampling_rate,
- win_length,
- hop_length,
- n_fft=None,
- mel_fmin=0,
- mel_fmax=None,
- clamp=1e-5,
- ):
- super().__init__()
- n_fft = win_length if n_fft is None else n_fft
- self.hann_window = {}
- mel_basis = mel(
- sr=sampling_rate,
- n_fft=n_fft,
- n_mels=n_mel_channels,
- fmin=mel_fmin,
- fmax=mel_fmax,
- htk=True,
- )
- mel_basis = torch.from_numpy(mel_basis).float()
- self.register_buffer("mel_basis", mel_basis)
- self.n_fft = win_length if n_fft is None else n_fft
- self.hop_length = hop_length
- self.win_length = win_length
- self.sampling_rate = sampling_rate
- self.n_mel_channels = n_mel_channels
- self.clamp = clamp
- self.is_half = is_half
-
- def forward(self, audio, keyshift=0, speed=1, center=True):
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(self.n_fft * factor))
- win_length_new = int(np.round(self.win_length * factor))
- hop_length_new = int(np.round(self.hop_length * speed))
- keyshift_key = str(keyshift) + "_" + str(audio.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
- audio.device
- )
- fft = torch.stft(
- audio,
- n_fft=n_fft_new,
- hop_length=hop_length_new,
- win_length=win_length_new,
- window=self.hann_window[keyshift_key],
- center=center,
- return_complex=True,
- )
- magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
- if keyshift != 0:
- size = self.n_fft // 2 + 1
- resize = magnitude.size(1)
- if resize < size:
- magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
- magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
- mel_output = torch.matmul(self.mel_basis, magnitude)
- if self.is_half == True:
- mel_output = mel_output.half()
- log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
- return log_mel_spec
-
-
-class RMVPE:
- def __init__(self, model_path, is_half, device=None):
- self.resample_kernel = {}
- model = E2E(4, 1, (2, 2))
- ckpt = torch.load(model_path, map_location="cpu")
- model.load_state_dict(ckpt)
- model.eval()
- if is_half == True:
- model = model.half()
- self.model = model
- self.resample_kernel = {}
- self.is_half = is_half
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.device = device
- self.mel_extractor = MelSpectrogram(
- is_half, 128, 16000, 1024, 160, None, 30, 8000
- ).to(device)
- self.model = self.model.to(device)
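-        # 360 pitch bins spaced 20 cents apart; with f0 = 10 * 2**(cents / 1200), bin 0 (~1997.4 cents) is roughly 31.7 Hz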
- cents_mapping = 20 * np.arange(360) + 1997.3794084376191
- self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
-
- def mel2hidden(self, mel):
- with torch.no_grad():
- n_frames = mel.shape[-1]
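-            # pad the time axis up to a multiple of 32 so the U-Net's downsampling stages divide evenly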
- mel = F.pad(
- mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect"
- )
- hidden = self.model(mel)
- return hidden[:, :n_frames]
-
- def decode(self, hidden, thred=0.03):
- cents_pred = self.to_local_average_cents(hidden, thred=thred)
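-        # cents -> Hz relative to 10 Hz; unvoiced frames (cents_pred == 0) come out as 10 Hz and are zeroed below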
- f0 = 10 * (2 ** (cents_pred / 1200))
- f0[f0 == 10] = 0
- # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
- return f0
-
- def infer_from_audio(self, audio, thred=0.03):
- audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0)
- # torch.cuda.synchronize()
- # t0=ttime()
- mel = self.mel_extractor(audio, center=True)
- # torch.cuda.synchronize()
- # t1=ttime()
- hidden = self.mel2hidden(mel)
- # torch.cuda.synchronize()
- # t2=ttime()
- hidden = hidden.squeeze(0).cpu().numpy()
- if self.is_half == True:
- hidden = hidden.astype("float32")
- f0 = self.decode(hidden, thred=thred)
- # torch.cuda.synchronize()
- # t3=ttime()
- # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
- return f0
-
- def to_local_average_cents(self, salience, thred=0.05):
- # t0 = ttime()
-        center = np.argmax(salience, axis=1)  # index of the most salient bin per frame, shape (n_frames,)
- salience = np.pad(salience, ((0, 0), (4, 4))) # frame length,368
- # t1 = ttime()
- center += 4
- todo_salience = []
- todo_cents_mapping = []
- starts = center - 4
- ends = center + 5
- for idx in range(salience.shape[0]):
- todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
- todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
- # t2 = ttime()
- todo_salience = np.array(todo_salience) # frame length,9
- todo_cents_mapping = np.array(todo_cents_mapping) # frame length,9
- product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
- weight_sum = np.sum(todo_salience, 1) # frame length
-        divided = product_sum / weight_sum  # frame length
-        # t3 = ttime()
-        maxx = np.max(salience, axis=1)  # frame length
-        divided[maxx <= thred] = 0
-        # t4 = ttime()
-        # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
-        return divided
-
-
-# if __name__ == '__main__':
-# audio, sampling_rate = sf.read("Quotations~1.wav") ### edit
-# if len(audio.shape) > 1:
-# audio = librosa.to_mono(audio.transpose(1, 0))
-# audio_bak = audio.copy()
-# if sampling_rate != 16000:
-# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt"
-# thred = 0.03 # 0.01
-# device = 'cuda' if torch.cuda.is_available() else 'cpu'
-# rmvpe = RMVPE(model_path,is_half=False, device=device)
-# t0=ttime()
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# t1=ttime()
-# print(f0.shape,t1-t0)
diff --git a/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py b/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
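-    # WaveNet-style gated activation: tanh over the first n_channels, sigmoid over the rest, multiplied elementwise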
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
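-    # differencing along t_x leaves a contiguous run of 1s marking the output frames assigned to each input step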
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md b/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md
deleted file mode 100644
index ea8edda4b318b3bb95cc140dc500ca09ccda44d0..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
CandyDoll Elizabeta S Set 010jpg: A Review of the Photo Collection
-
If you are a fan of young European models, you may have heard of CandyDoll, a Japanese publisher and brand that produces photo books and videos of them. One of their popular models is Elizabeta S, a blonde beauty with blue eyes and a sweet smile. In this article, we will review one of her photo collections, CandyDoll Elizabeta S Set 010jpg, and see if it is worth buying.
CandyDoll is a website that offers monthly subscriptions to access exclusive content of young European models. The website claims that all models are over 18 years old and have consented to participate in the photo shoots. The website also states that all content is legal and complies with Japanese laws.
-
Elizabeta S is one of the models featured on CandyDoll. She is from Russia and has been modeling since she was 14 years old. She has appeared in several photo books and videos for CandyDoll, as well as other brands such as Silver Stars and Fashion Land. She is known for her cute and innocent look, as well as her slender figure and long legs.
-
CandyDoll Elizabeta S Set 010jpg is one of her photo collections that was released in November 2021. It contains 100 photos of Elizabeta S in various outfits and poses. The theme of the collection is leotard and swimsuit, which showcase her curves and charm. The style of the collection is playful and cheerful, with bright colors and fun props.
-
The main features and benefits of buying this photo collection are:
-
-
You can enjoy high-quality photos of Elizabeta S in different scenarios.
-
You can admire her beauty and personality in every photo.
-
You can support her modeling career and help her grow as an artist.
-
You can add this collection to your personal library or share it with your friends.
-
-
Content and Quality
-
The photo collection consists of 100 photos in JPG format. The size of each photo is about 3 MB, which means they have high resolution and clarity. The dimensions of each photo are about 1800 x 1200 pixels, which means they can fit most screens and devices.
-
CandyDoll Elizabeta S leotard
-CandyDoll Elizabeta S swimsuit
-CandyDoll Elizabeta S dress
-CandyDoll Elizabeta S model
-CandyDoll Elizabeta S young
-CandyDoll Elizabeta S teen
-CandyDoll Elizabeta S girl
-CandyDoll Elizabeta S pretty
-CandyDoll Elizabeta S whip
-CandyDoll Elizabeta S photo collection
-CandyDoll Elizabeta S high-quality images
-CandyDoll Elizabeta S Japanese company
-CandyDoll Elizabeta S leotardmodel
-CandyDoll Elizabeta S swimsuitmodel
-CandyDoll Elizabeta S youngmodel
-CandyDoll Elizabeta S girlmodel
-CandyDoll Elizabeta S teenmodel
-CandyDoll Elizabeta S prettymodel
-CandyDoll Elizabeta S candydolltv
-CandyDoll Elizabeta S teenage
-CandyDoll Elizabeta S petite
-CandyDoll Elizabeta S tween
-Historidecade candydoll elizabeta s
-Twitter candydoll elizabeta s
-Soundcloud candydoll elizabeta s
-Feipoicircgreas1985 candydoll elizabeta s
-Retweets candydoll elizabeta s
-Likes candydoll elizabeta s
-Bookmarks candydoll elizabeta s
-Karl3.2.3 candydoll elizabeta s
-Hakan_marmaris candydoll elizabeta s
-Guillermo Trébol candydoll elizabeta s
-FigasLeonardo candydoll elizabeta s
-Süper candydoll elizabeta s
-Cute candydoll elizabeta s
-Nov 2 2021 candydoll elizabeta s
-Jan 5 2022 candydoll elizabeta s
-Jul 7 2022 candydoll elizabeta s
-Jul 18 2022 candydoll elizabeta s
-Nov 6 2022 candydoll elizabeta s
-
The lighting, color, contrast, and sharpness of the photos are excellent. The photos are well-lit and have vibrant colors that match the mood of each scene. The contrast between Elizabeta S's skin tone and her outfits is striking and appealing. The sharpness of the photos is crisp and detailed, which allows you to see every feature of her face and body.
-
The posing, expression, and outfit of Elizabeta S in the photos are stunning. She poses confidently and naturally in front of the camera, showing off her flexibility and grace. She smiles warmly and innocently in most photos, but also makes some seductive and teasing expressions in others. She wears various leotards and swimsuits that accentuate her curves and charm. Some examples are:
-
-
-
-
-
The background, setting, and props of the photos are creative and fun. The photos were taken in various locations such as a studio, a park, a beach, a pool, etc. The backgrounds are colorful and lively, creating a contrast with Elizabeta S's outfits. The settings are realistic and appropriate for each scene. The props are cute and amusing, such as balloons, bubbles, flowers, etc.
-
Price and Availability
-
The price of the photo collection is $29.99 USD (or equivalent currency). You can purchase it online through CandyDoll's website using PayPal or credit card. You will receive an email with a download link after completing your payment. You can download the photo collection as a ZIP file to your computer or device.
-
There is currently no discount or bundle offer for this photo collection. However, you can save money by subscribing to CandyDoll's monthly plan for $49.99 USD (or equivalent currency). This will give you access to all content on their website, including new releases every month.
-
There is also no refund or exchange policy for this photo collection. Once you purchase it, you cannot cancel or return it. You also cannot share or resell it to others without violating CandyDoll's terms of service.
-
Pros and Cons
-
Before you decide whether to buy this photo collection or not, you should weigh its pros and cons carefully. Here are some points to consider:
-
Pros
-
-
The photo collection has high-quality photos that capture Elizabeta S's beauty and personality.
-
The photo collection has diverse content that showcases Elizabeta S's versatility as a model.
-
The photo collection has a reasonable price that reflects its value as a product.
-
The photo collection has an easy purchase process that ensures your convenience as a customer.
-
-
Cons
-
-
The photo collection may not be suitable for everyone's taste or preference.
-
The photo collection may not be legal or ethical in some countries or regions.
-
The photo collection may not be safe or secure to download or store on your computer or device.
-
The photo collection may not have any customer service or support if you encounter any problems or issues.
-
-
Conclusion
-
In conclusion, CandyDoll Elizabeta S Set 010jpg is a photo collection that features Elizabeta S in various leotards and swimsuits. It has high-quality photos that display her beauty and personality in different scenarios. It has diverse content that shows her versatility as a model. It has a reasonable price that reflects its value as a product. It has an easy purchase process that ensures your convenience as a customer. However, it also has some drawbacks that you should be aware of before buying it. It may not be suitable for everyone's taste or preference. It may not be legal or ethical in some countries or regions. It may not be safe or secure to download or store on your computer encounter any problems or issues. Therefore, you should carefully consider the pros and cons of this photo collection before making your final decision. If you are a fan of Elizabeta S and CandyDoll, you may find this photo collection worth buying. If you are not, you may want to look for other options that suit your needs and expectations better. We hope this review has been helpful and informative for you. If you have any questions or opinions about this photo collection, please feel free to leave a comment below. We would love to hear from you and see what you think.
FAQs
-
Q: How old is Elizabeta S?
-
A: According to CandyDoll's website, Elizabeta S is 18 years old as of 2021. However, some sources claim that she is older or younger than that. We cannot verify her exact age or birthday.
-
Q: Where can I find more photos or videos of Elizabeta S?
-
A: You can find more photos or videos of Elizabeta S on CandyDoll's website, as well as other websites such as Silver Stars and Fashion Land. You can also follow her on social media platforms such as Twitter and Instagram.
-
Q: Is CandyDoll legal and ethical?
-
A: CandyDoll claims that all its content is legal and ethical, and that all models are over 18 years old and have consented to participate in the photo shoots. However, some countries or regions may have different laws or standards regarding the production and distribution of such content. Therefore, you should check your local regulations before buying or viewing any CandyDoll products.
-
Q: How can I contact CandyDoll or Elizabeta S?
-
A: You can contact CandyDoll through their email address: info@candydoll.tv. You can also contact Elizabeta S through her social media accounts or her fan club.
-
Q: What are some other similar products or brands to CandyDoll?
-
A: Some other similar products or brands to CandyDoll are Silver Stars, Fashion Land, New Star, Teen Models Club, etc. They also feature young European models in various outfits and poses.
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md b/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md
deleted file mode 100644
index 7708a8ba58df4fd37867e44baa100954e6f5946c..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen: A Comprehensive Review
-
If you are looking for a powerful and versatile photo editing and graphic design software, you might have heard of Corel PaintShop Pro 2020 Ultimate. This software is one of the best alternatives to Adobe Photoshop, offering a wide range of features and tools to help you create stunning images and designs. But what is Corel PaintShop Pro 2020 Ultimate exactly, and how can you get it for free with a keygen? In this article, we will answer these questions and more, giving you a comprehensive review of this amazing software.
Corel PaintShop Pro 2020 Ultimate is the latest version of Corel's flagship photo editing and graphic design software. It is designed for both beginners and professionals, offering a user-friendly interface and a rich set of features to suit any creative project. Whether you want to enhance your photos, create stunning graphics, or design logos, flyers, posters, or web pages, Corel PaintShop Pro 2020 Ultimate can help you achieve your goals.
-
What is a keygen and why do you need it?
-
A keygen is a software that generates serial numbers or activation codes for other software. You need a keygen to activate Corel PaintShop Pro 2020 Ultimate because it is a paid software that requires a valid license to use. However, with a keygen, you can bypass the license verification process and use the software for free without any limitations or restrictions.
-
What are the benefits of using Corel PaintShop Pro 2020 Ultimate?
-
There are many benefits of using Corel PaintShop Pro 2020 Ultimate, such as:
-
-
You can edit and enhance your photos with advanced tools such as AI-powered noise removal, HDR effects, lens correction, color grading, and more.
-
You can create stunning graphics with tools such as text and typography, vector drawing and editing, brushes and gradients, and more.
-
You can access bonus software and content that are included in the Ultimate edition, such as GRFX Studio, Parallels Toolbox, PhotoMirage Express, Painter Essentials 6, and AfterShot 3.
-
You can save money by using a keygen to activate the software for free instead of paying for a subscription or a one-time purchase.
-
-
Features of Corel PaintShop Pro 2020 Ultimate
-
Photo editing tools
-
Corel PaintShop Pro 2020 Ultimate offers a comprehensive set of photo editing tools that allow you to edit your photos in any way you want. Some of the photo editing tools are:
-
AI-powered tools
-
Corel PaintShop Pro 2020 Ultimate uses artificial intelligence to help you improve your photos with ease. For example, you can use the AI Upsampling tool to enlarge your photos without losing quality, the AI Denoise tool to remove noise from your photos without blurring details, the AI Artifact Removal tool to remove compression artifacts from JPEG images without affecting colors or sharpness, and the AI Style Transfer tool to apply artistic styles to your photos with one click.
-
Creative presets and filters
-
Corel PaintShop Pro 2020 Ultimate also provides you with creative presets and filters that let you add various effects to your photos with one click. For example, you can use the Instant Effects palette to apply different categories of effects such as artistic, film styles, landscape, portrait, retro, traditional, and more. You can also use the Pic-to-Painting presets to transform your photos into paintings with different styles such as impressionist, modernist, illustration, watercolor, colored pencil, pastel sketch, oil painting, and more.
-
Layer and mask support
-
Corel PaintShop Pro 2020 Ultimate also supports layers and masks that enable you to work with multiple images or elements on separate layers. You can use layers to combine images or elements in different ways such as blending modes, opacity levels, grouping, and more. You can also use masks to hide or reveal parts of an image or element on a layer. You can use different types of masks such as adjustment masks, clipping masks, gradient masks, and more.
-
Graphic design tools
-
Besides photo editing tools, Corel PaintShop Pro 2020 Ultimate also offers graphic design tools that allow you to create stunning graphics for various purposes. Some of the graphic design tools are:
-
How to activate Corel PaintShop Pro 2020 Ultimate with keygen
-Corel PaintShop Pro 2020 Ultimate 22.2.0.8 crack download
-Corel PaintShop Pro 2020 Ultimate serial number generator
-Corel PaintShop Pro 2020 Ultimate full version free download
-Corel PaintShop Pro 2020 Ultimate patch + keygen
-Corel PaintShop Pro 2020 Ultimate license key activation
-Corel PaintShop Pro 2020 Ultimate torrent link
-Corel PaintShop Pro 2020 Ultimate review and features
-Corel PaintShop Pro 2020 Ultimate system requirements and compatibility
-Corel PaintShop Pro 2020 Ultimate installation guide and tutorial
-Corel PaintShop Pro 2020 Ultimate best price and discount
-Corel PaintShop Pro 2020 Ultimate alternative and comparison
-Corel PaintShop Pro 2020 Ultimate tips and tricks
-Corel PaintShop Pro 2020 Ultimate support and customer service
-Corel PaintShop Pro 2020 Ultimate update and upgrade
-Corel PaintShop Pro 2020 Ultimate bonus content and plugins
-Corel PaintShop Pro 2020 Ultimate user manual and documentation
-Corel PaintShop Pro 2020 Ultimate online course and training
-Corel PaintShop Pro 2020 Ultimate video editing and photo editing software
-Corel PaintShop Pro 2020 Ultimate vs Adobe Photoshop CC 2021
-Corel PaintShop Pro 2020 Ultimate vs Affinity Photo
-Corel PaintShop Pro 2020 Ultimate vs GIMP
-Corel PaintShop Pro 2020 Ultimate vs Lightroom Classic
-Corel PaintShop Pro 2020 Ultimate vs Luminar AI
-Corel PaintShop Pro 2020 Ultimate vs ON1 Photo RAW
-Corel PaintShop Pro 2020 Ultimate vs PhotoDirector Ultra
-Corel PaintShop Pro 2020 Ultimate vs PhotoScape X
-Corel PaintShop Pro 2020 Ultimate vs Pixelmator Pro
-Corel PaintShop Pro 2020 Ultimate vs Skylum Aurora HDR
-Corel PaintShop Pro 2020 Ultimate vs Snapseed
-Benefits of using Corel PaintShop Pro 2020 Ultimate keygen
-Risks of using Corel PaintShop Pro 2020 Ultimate keygen
-How to avoid malware and viruses from Corel PaintShop Pro 2020 Ultimate keygen
-How to fix errors and issues from Corel PaintShop Pro 2020 Ultimate keygen
-How to uninstall and remove Corel PaintShop Pro 2020 Ultimate keygen
-How to backup and restore Corel PaintShop Pro 2020 Ultimate keygen
-How to transfer and migrate Corel PaintShop Pro 2020 Ultimate keygen to another device
-How to share and collaborate with Corel PaintShop Pro 2020 Ultimate keygen
-How to optimize and improve performance of Corel PaintShop Pro 2020 Ultimate keygen
-How to customize and personalize Corel PaintShop Pro 2020 Ultimate keygen settings and preferences
-
Text and typography tools
-
Corel PaintShop Pro 2020 Ultimate allows you to add text to your images or designs with various options such as fonts, sizes, styles, colors, alignments, spacing, and more. You can also use typography tools such as kerning, leading, tracking, and more. You can also create text effects such as drop shadows, glows, outlines, and more.
-
Vector drawing and editing tools
-
Corel PaintShop Pro 2020 Ultimate also lets you draw and edit vector graphics with precision and ease. You can use vector drawing tools such as pen, shape, curve, and more. You can also use vector editing tools such as node editing, transforming, aligning, distributing, and more. You can also convert raster images into vector graphics with the Smart Carver tool.
-
Brushes and gradients tools
-
Corel PaintShop Pro 2020 Ultimate also gives you access to brushes and gradients tools that allow you to add color and texture to your images or designs. You can use brushes tools such as paintbrush, airbrush, eraser, clone brush, and more. You can also use gradients tools such as linear gradient, radial gradient, conical gradient, and more. You can also create custom brushes and gradients with your own images or colors.
-
Bonus software and content
- and content are:
-
GRFX Studio
-
GRFX Studio is a powerful photo editing software that allows you to apply thousands of photo effects and filters to your images with one click. You can also use GRFX Studio to adjust your photos with tools such as crop, rotate, resize, sharpen, and more. You can also use GRFX Studio to create collages, frames, borders, and stickers with your photos.
-
Parallels Toolbox
-
Parallels Toolbox is a handy software that provides you with over 30 tools to optimize your PC performance and productivity. You can use Parallels Toolbox to clean your disk, free up memory, uninstall apps, download videos, record screen, take screenshots, and more.
-
PhotoMirage Express
-
PhotoMirage Express is a fun software that allows you to animate your photos with ease. You can use PhotoMirage Express to create photo animations by adding motion arrows and anchor points to your photos. You can also use PhotoMirage Express to adjust the speed, direction, and smoothness of your animations. You can also use PhotoMirage Express to export and share your animations as GIFs, videos, or social media posts.
-
Painter Essentials 6
-
Painter Essentials 6 is a beginner-friendly software that allows you to create digital paintings from your photos or blank canvas. You can use Painter Essentials 6 to choose from various painting styles such as oil, watercolor, impressionist, sketch, and more. You can also use Painter Essentials 6 to customize your brushes, colors, textures, and more. You can also use Painter Essentials 6 to learn from tutorials and tips from experts.
-
AfterShot 3
-
AfterShot 3 is a fast and easy software that allows you to edit and organize your RAW photos. You can use AfterShot 3 to batch process your RAW photos with tools such as crop, rotate, straighten, white balance, exposure, contrast, and more. You can also use AfterShot 3 to apply presets and filters to your RAW photos with one click. You can also use AfterShot 3 to manage your photo library with tools such as ratings, tags, collections, and more.
-
How to use Corel PaintShop Pro 2020 Ultimate keygen?
-
If you want to use Corel PaintShop Pro 2020 Ultimate for free with a keygen, you need to follow these simple steps:
-
Download and install the software
-
The first step is to download and install the software from the official website or any other trusted source. You can choose between the trial version or the full version. The trial version will expire after 30 days, while the full version will require a license key to activate.
-
Run the keygen and generate a serial number
-
The second step is to run the keygen that you can download from this link or any other reliable source. The keygen is a small program that will generate a serial number for you. You need to copy the serial number and paste it into the software when prompted.
-
Activate the software with the serial number and enjoy!
-
The final step is to activate the software with the serial number that you got from the keygen. You need to enter the serial number into the software and click on activate. The software will verify the serial number and unlock all the features and tools for you. You can now enjoy using Corel PaintShop Pro 2020 Ultimate for free!
-
Conclusion
-
Summary of the main points
-
In conclusion, Corel PaintShop Pro 2020 Ultimate is a powerful and versatile photo editing and graphic design software that offers a wide range of features and tools to help you create stunning images and designs. It is one of the best alternatives to Adobe Photoshop, offering a user-friendly interface and a rich set of features to suit any creative project. You can also get it for free with a keygen that will generate a serial number for you.
-
Call to action
-
If you are interested in trying out Corel PaintShop Pro 2020 Ultimate for yourself, you can download it from this link or any other trusted source. You can also download the keygen from this link or any other reliable source. Follow the steps above to install and activate the software with the keygen and enjoy using it for free!
-
If you liked this article, please share it with your friends and leave a comment below. Thank you for reading!
-
Frequently Asked Questions
-
-
Is Corel PaintShop Pro 2020 Ultimate safe to use?
-
Yes, Corel PaintShop Pro 2020 Ultimate is safe to use as long as you download it from the official website or any other trusted source. However, you should be careful when downloading and using a keygen, as some keygens may contain viruses or malware that can harm your PC. You should always scan any file that you download with an antivirus program before opening it.
-
Is Corel PaintShop Pro 2020 Ultimate compatible with Windows 10?
-
Yes, Corel PaintShop Pro 2020 Ultimate is compatible with Windows 10 as well as Windows 8.1 and Windows 7 (64-bit editions only). It requires at least 4 GB of RAM, 1 GB of hard disk space, and an Intel or AMD processor with 64-bit support.
-
Can I use Corel PaintShop Pro 2020 Ultimate on Mac?
-
No, Corel PaintShop Pro 2020 Ultimate is not available for Mac users. However, you can use Parallels Desktop or Boot Camp to run Windows on your Mac and then install Corel PaintShop Pro 2020 Ultimate on it.
-
Can I use Corel PaintShop Pro 2020 Ultimate offline?
-
Yes, you can use Corel PaintShop Pro 2020 Ultimate offline once you have activated it with a serial number from a keygen. However, you may need an internet connection for some features such as online help, updates, or bonus software downloads.
-
Can I update Corel PaintShop Pro 2020 Ultimate after using a keygen?
-
No, you should not update Corel PaintShop Pro 2020 Ultimate after using a keygen because it may invalidate your serial number and deactivate your software. You should only update your software if you have purchased a valid license from Corel or an authorized reseller.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py b/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py
deleted file mode 100644
index 676718fff3c6a7120cea91b0cfc95f8872929da7..0000000000000000000000000000000000000000
--- a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py
+++ /dev/null
@@ -1,79 +0,0 @@
-''' Example file to test tts_infer after installing it. Refer to section 1.1 in README.md for steps of installation. '''
-
-from tts_infer.tts import TextToMel, MelToWav
-from tts_infer.transliterate import XlitEngine
-from tts_infer.num_to_word_on_sent import normalize_nums
-
-import re
-import numpy as np
-from scipy.io.wavfile import write
-
-from mosestokenizer import *
-from indicnlp.tokenize import sentence_tokenize
-
-INDIC = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"]
-
-def split_sentences(paragraph, language):
- if language == "en":
- with MosesSentenceSplitter(language) as splitter:
- return splitter([paragraph])
- elif language in INDIC:
- return sentence_tokenize.sentence_split(paragraph, lang=language)
-
-
-device='cpu'
-text_to_mel = TextToMel(glow_model_dir='/path/to/glow_ckp', device=device)
-mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi_ckp', device=device)
-
-lang='hi' # transliteration from En to Hi
-engine = XlitEngine(lang) # loading translit model globally
-
-def translit(text, lang):
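-    # only tokens that begin with a Latin letter are transliterated; native-script words pass through unchanged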
- reg = re.compile(r'[a-zA-Z]')
- words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()]
- updated_sent = ' '.join(words)
- return updated_sent
-
-def run_tts(text, lang):
- text = text.replace('।', '.') # only for hindi models
- text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang
- text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang
- final_text = ' ' + text_num_to_word_and_transliterated
-
- mel = text_to_mel.generate_mel(final_text)
- audio, sr = mel_to_wav.generate_wav(mel)
- write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed
- return (sr, audio)
-
-def run_tts_paragraph(text, lang):
- audio_list = []
-    split_sentences_list = split_sentences(text, language=lang)
-
- for sent in split_sentences_list:
- sr, audio = run_tts(sent, lang)
- audio_list.append(audio)
-
- concatenated_audio = np.concatenate([i for i in audio_list])
- write(filename='temp_long.wav', rate=sr, data=concatenated_audio)
- return (sr, concatenated_audio)
-
-if __name__ == "__main__":
- _, audio = run_tts('mera naam neeraj hai', 'hi')
-
- para = '''
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है।
- इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी,
- पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है।
- भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है।
- '''
-
- print('Num chars in paragraph: ', len(para))
- _, audio_long = run_tts_paragraph(para, 'hi')
diff --git a/spaces/rajeshradhakrishnan/english-malayalam/static/script.js b/spaces/rajeshradhakrishnan/english-malayalam/static/script.js
deleted file mode 100644
index 9eb3b0c72b77af95f505be70503c8e42794f8470..0000000000000000000000000000000000000000
--- a/spaces/rajeshradhakrishnan/english-malayalam/static/script.js
+++ /dev/null
@@ -1,90 +0,0 @@
-
-const translateText = async (text) => {
- console.log(text)
- const inferResponse = await fetch(`infer_t5?input=${text}`);
- const inferJson = await inferResponse.json();
- console.log(inferJson.output)
- return inferJson.output;
- };
-
-
-function generatePrompterAssistantText(inputString) {
- // Split the input string into an array of sentences
- const sentences = inputString.split('<|endoftext|>');
-
- // Initialize arrays for prompter and assistant text
- let prompterText = [];
- let assistantText = [];
-
- // Loop through each sentence and add it to the prompter or assistant text array
- for (let i = 0; i < sentences.length; i++) {
- // Check if the sentence contains the tag
- if (sentences[i].includes('<|prompter|>')) {
- // Extract the text within the tags and add it to the prompter text array
- const prompterSentence = sentences[i].replace(/<\|prompter\|>/g, '');
- prompterText.push(prompterSentence);
- } else if (sentences[i].includes('<|assistant|>')) {
- const assistantSentence = sentences[i].replace(/<\|assistant\|>/g, '');
- // Add the sentence to the assistant text array
- assistantText.push(assistantSentence);
- }
- }
-
- // Return the prompter and assistant text arrays
- return [prompterText, assistantText];
- }
-
-const submitButton = document.querySelector('#submit')
-const outPutElement = document.querySelector('#output')
-const inputElement = document.querySelector('input')
-const historyElement = document.querySelector('.history')
-const buttonElement = document.querySelector('button')
-
-
-function changeInput(value)
-{
- console.log(value)
- const inputElement = document.querySelector('input')
- inputElement.value = value
-}
-async function getMessage(){
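-    // NOTE: API_KEY is assumed to be defined elsewhere (e.g. injected by the hosting page); it is not declared in this file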
- //console.log("input value "+ inputElement.value)
- const options = {
- method: "POST",
- headers: {
- Authorization: `Bearer ${API_KEY}`,
- "Content-Type": "application/json"
- },
- body: JSON.stringify({
- inputs: "<|prompter|>" + inputElement.value + "<|endoftext|><|assistant|>",
- parameters: {"max_new_tokens": 200, "temperature": 0.9}
- })
- }
- try{
- const response = await fetch("https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", options);
- const data = await response.json()
- //console.log(data[0].generated_text)
-
- if(inputElement.value && data && data[0].generated_text){
- const [prompterText, assistantText] = generatePrompterAssistantText(data[0].generated_text);
- // const en_text_ml = "English: " + assistantText[0] + " Malayalam:";
- // console.log(en_text_ml)
- //console.log(prompterText)
- //console.log(assistantText)
- outPutElement.textContent = await translateText(assistantText[0]);
- const pElement = document.createElement('p')
- pElement.textContent = inputElement.value
- pElement.addEventListener('click', () => changeInput(pElement.textContent))
- historyElement.append(pElement)
- }
- } catch(error) {
- console.log(error)
- }
-}
-
-submitButton.addEventListener('click', getMessage)
-
-function clearInput(){
- inputElement.value = ''
-}
-buttonElement.addEventListener('click', clearInput)
\ No newline at end of file
diff --git a/spaces/rajistics/h2o_wave_transformers/Dockerfile b/spaces/rajistics/h2o_wave_transformers/Dockerfile
deleted file mode 100644
index d9a2be1c94a67457831a1ca78ca9ba43e5b8289a..0000000000000000000000000000000000000000
--- a/spaces/rajistics/h2o_wave_transformers/Dockerfile
+++ /dev/null
@@ -1,31 +0,0 @@
-# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
-# you will also find guides on how best to write your Dockerfile
-
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg && rm -rf /var/lib/apt/lists/*
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-RUN useradd -m -u 1000 user
-
-USER user
-
-ENV HOME=/home/user
-ENV PATH=/home/user/.local/bin:$PATH
-
-WORKDIR $HOME/app
-
-COPY --chown=user . $HOME/app
-
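-# H2O Wave must listen on 7860, the port Hugging Face Spaces exposes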
-ENV H2O_WAVE_LISTEN=":7860"
-ENV H2O_WAVE_ADDRESS='http://127.0.0.1:7860'
-ENV H2O_WAVE_DATA_DIR='/home/user/app/data'
-
-RUN mkdir -p $HOME/app/data
-
-
-CMD ["wave", "run", "app", "--no-reload"]
\ No newline at end of file
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts
deleted file mode 100644
index e1185478461f4b15444b7b2ae114c8a6819a992a..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts
+++ /dev/null
@@ -1,131 +0,0 @@
-/**
- * The `querystring` module provides utilities for parsing and formatting URL
- * query strings. It can be accessed using:
- *
- * ```js
- * const querystring = require('querystring');
- * ```
- *
- * `querystring` is more performant than `URLSearchParams` but is not a
- * standardized API. Use `URLSearchParams` when performance is not critical
- * or when compatibility with browser code is desirable.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/querystring.js)
- */
-declare module 'querystring' {
- interface StringifyOptions {
- encodeURIComponent?: ((str: string) => string) | undefined;
- }
- interface ParseOptions {
- maxKeys?: number | undefined;
- decodeURIComponent?: ((str: string) => string) | undefined;
- }
-    interface ParsedUrlQuery extends NodeJS.Dict<string | string[]> {}
-    interface ParsedUrlQueryInput extends NodeJS.Dict<string | number | boolean | ReadonlyArray<string> | ReadonlyArray<number> | ReadonlyArray<boolean> | null> {}
- /**
- * The `querystring.stringify()` method produces a URL query string from a
- * given `obj` by iterating through the object's "own properties".
- *
- * It serializes the following types of values passed in `obj`:[string](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) |
- * [number](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) |
- * [bigint](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) |
- * [boolean](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) |
- * [string\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) |
- * [number\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) |
- * [bigint\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) |
- * [boolean\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) The numeric values must be finite. Any other input values will be coerced to
- * empty strings.
- *
- * ```js
- * querystring.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' });
- * // Returns 'foo=bar&baz=qux&baz=quux&corge='
- *
- * querystring.stringify({ foo: 'bar', baz: 'qux' }, ';', ':');
- * // Returns 'foo:bar;baz:qux'
- * ```
- *
- * By default, characters requiring percent-encoding within the query string will
- * be encoded as UTF-8\. If an alternative encoding is required, then an alternative`encodeURIComponent` option will need to be specified:
- *
- * ```js
- * // Assuming gbkEncodeURIComponent function already exists,
- *
- * querystring.stringify({ w: '中文', foo: 'bar' }, null, null,
- * { encodeURIComponent: gbkEncodeURIComponent });
- * ```
- * @since v0.1.25
- * @param obj The object to serialize into a URL query string
- * @param [sep='&'] The substring used to delimit key and value pairs in the query string.
- * @param [eq='='] . The substring used to delimit keys and values in the query string.
- */
- function stringify(obj?: ParsedUrlQueryInput, sep?: string, eq?: string, options?: StringifyOptions): string;
- /**
- * The `querystring.parse()` method parses a URL query string (`str`) into a
- * collection of key and value pairs.
- *
- * For example, the query string `'foo=bar&abc=xyz&abc=123'` is parsed into:
- *
- * ```js
- * {
- * foo: 'bar',
- * abc: ['xyz', '123']
- * }
- * ```
- *
-     * The object returned by the `querystring.parse()` method _does not_ prototypically inherit from the JavaScript `Object`. This means that typical `Object` methods such as `obj.toString()`,
- * `obj.hasOwnProperty()`, and others
- * are not defined and _will not work_.
- *
- * By default, percent-encoded characters within the query string will be assumed
- * to use UTF-8 encoding. If an alternative character encoding is used, then an
- * alternative `decodeURIComponent` option will need to be specified:
- *
- * ```js
- * // Assuming gbkDecodeURIComponent function already exists...
- *
- * querystring.parse('w=%D6%D0%CE%C4&foo=bar', null, null,
- * { decodeURIComponent: gbkDecodeURIComponent });
- * ```
- * @since v0.1.25
- * @param str The URL query string to parse
- * @param [sep='&'] The substring used to delimit key and value pairs in the query string.
- * @param [eq='='] . The substring used to delimit keys and values in the query string.
- */
- function parse(str: string, sep?: string, eq?: string, options?: ParseOptions): ParsedUrlQuery;
- /**
- * The querystring.encode() function is an alias for querystring.stringify().
- */
- const encode: typeof stringify;
- /**
- * The querystring.decode() function is an alias for querystring.parse().
- */
- const decode: typeof parse;
- /**
- * The `querystring.escape()` method performs URL percent-encoding on the given `str` in a manner that is optimized for the specific requirements of URL
- * query strings.
- *
- * The `querystring.escape()` method is used by `querystring.stringify()` and is
- * generally not expected to be used directly. It is exported primarily to allow
- * application code to provide a replacement percent-encoding implementation if
- * necessary by assigning `querystring.escape` to an alternative function.
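- *
- * As a purely illustrative sketch (not taken from the Node.js documentation), a
- * replacement that encodes spaces as `+` could wrap the original
- * implementation:
- *
- * ```js
- * const querystring = require('querystring');
- *
- * // Keep a reference to the built-in implementation and post-process its output.
- * const originalEscape = querystring.escape;
- * querystring.escape = (str) => originalEscape(str).replace(/%20/g, '+');
- * ```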
- * @since v0.1.25
- */
- function escape(str: string): string;
- /**
- * The `querystring.unescape()` method performs decoding of URL percent-encoded
- * characters on the given `str`.
- *
- * The `querystring.unescape()` method is used by `querystring.parse()` and is
- * generally not expected to be used directly. It is exported primarily to allow
- * application code to provide a replacement decoding implementation if
- * necessary by assigning `querystring.unescape` to an alternative function.
- *
- * By default, the `querystring.unescape()` method will attempt to use the
- * JavaScript built-in `decodeURIComponent()` method to decode. If that fails,
- * a safer equivalent that does not throw on malformed URLs will be used.
- * @since v0.1.25
- */
- function unescape(str: string): string;
-}
-declare module 'node:querystring' {
- export * from 'querystring';
-}
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md
deleted file mode 100644
index 0d08fc7e4a2dc67d472df5c0b06b018cccf52225..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Christmas is a very French festival, so you will find that there are lots of things you can do in the run-up to Christmas Day. There are many opportunities to celebrate in France during the holiday season. Let's look at some of the main ones.
-
Can't find a Christmas market in Paris? No worries! French Christmas markets are also held in Strasbourg and Besançon, where you can wander a holiday market and enjoy Christmas foods.
Christmas is a wonderful holiday: the food is wonderful, the sights and activities are great, and the French Christmas is so much fun. If you haven't celebrated a traditional Christmas yet, a French Christmas is a great place to start. Let's take a look at the essentials for planning a traditional French Christmas.
-
To start, I think it is important to mention that not all people who celebrate Christmas here are practicing Christians. There are many different ways to celebrate Christmas in France, and I encourage you to try a few of them and find out what works best for you. So, what do you do to celebrate the French Christmas?
-
As I mentioned above, not all people who celebrate Christmas here are practicing Christians. If you're open to other ways of celebrating, you'll find there are many non-religious options available in France. I recommend the following for a non-religious celebration of Christmas in France.
-
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md
deleted file mode 100644
index 2f060e881c72b514a59cf52e7f829da9d95801e8..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-Bluestacks 6.1.6.5643 Mod Rooted Offline Installer · Crack Lego Piratas Del Caribe Pc 39 · Young Goodman Brown a.k.a Download songs ... 1fdad05405
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md b/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md
deleted file mode 100644
index 488cda8e3845c8b807ed830e66d179de4166466e..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
Because the fsync call takes time, it greatly affects the performance of MySQL, which is why there are so many status variables related to fsyncs. To overcome the inherent limitations of storage devices, group commit allows multiple simultaneous transactions to fsync the log file once for all the transactions waiting for the fsync. There is no need for a transaction to call fsync for a write operation that another transaction has already forced to disk. A series of write transactions sent over a single database connection cannot benefit from group commit.
-
With the above numbers, the possible transaction rates in fully ACID mode are pretty depressing. But those drives were rotating ones; what about SSD drives? SSDs are memory devices and are much faster for random IO operations. They are extremely fast for reads and good for writes. But as you will see below, not that great for fsyncs.
The fsync system call is not the only one that persists data to disk; there is also fdatasync. fdatasync persists the data to disk but does not update metadata such as the file size and last update time. Put differently, it performs one write operation instead of two. In the Python script, if I replace os.fsync with os.fdatasync, here are the results for a subset of devices:
-
I tested your fsync.py using our SAN HPE 3PAR StoreServ 8400 storage. It is a relatively high-end flash-based storage device. 10,000 iterations took 19.303 s, or 1.903 ms per fsync (~518 fsyncs/second).
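A benchmark of this kind essentially times repeated small writes, each followed by an fsync, against a file on the device under test. The following is a minimal illustrative sketch, not the original fsync.py from the post; the file path and write size are arbitrary choices.

```python
#!/usr/bin/env python3
# Minimal fsync micro-benchmark sketch (illustrative, not the original fsync.py).
import os
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "fsync-testfile"
iterations = 10000

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
start = time.time()
for _ in range(iterations):
    os.write(fd, b"x" * 512)   # small write, similar to a transaction log record
    os.fsync(fd)               # force data and metadata to stable storage
    # os.fdatasync(fd)         # variant: data only, skips metadata updates
elapsed = time.time() - start
os.close(fd)

print(f"{iterations / elapsed:.0f} fsyncs/s, {elapsed / iterations * 1000:.3f} ms per fsync")
```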
-
Upon checkpoint, dirty buffers in shared buffers are written to the page cache managed by the kernel. Through an fsync(), these modified blocks are applied to disk. If an fsync() call is successful, all dirty pages from the corresponding file are guaranteed to be persisted on disk. When an fsync() is issued to flush the pages to disk, PostgreSQL cannot guarantee that it still holds a copy of the modified/dirty pages, because writes from the page cache to storage are managed entirely by the kernel, not by PostgreSQL.
-
In a fully durable configuration, MySQL tends to be impacted even more by poor fsync() performance; it may need to perform as many as three fsync operations per transaction commit. Group commit reduces the impact on throughput, but transaction latency will still be severely impacted.
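For context, a fully durable configuration generally corresponds to settings along these lines; this is a generic sketch rather than a configuration quoted from the post:

```ini
[mysqld]
# Flush and fsync the InnoDB redo log at every transaction commit
innodb_flush_log_at_trx_commit = 1
# fsync the binary log at every transaction commit as well
sync_binlog = 1
```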
-
-
A few years ago, Intel introduced a new type of storage device based on 3D XPoint technology and sold under the Optane brand. These devices outperform regular flash devices and have higher endurance. In the context of this post, I found they are also very good at handling the fsync call, something many flash devices are not great at doing.
-
The above results are pretty amazing. The fsync performance is on par with a RAID controller with a write cache, for which I got a rate of 23,000/s, and is much better than a regular NAND-based NVMe card like the Intel PC-3700, which delivers an fsync rate of about 7,300/s. Even with the full ext4 journal enabled, the rate is still excellent although, as expected, cut by about half.
-
-
If you have a large dataset, you can still use the Optane card as a read/write cache and improve fsync performance significantly. I did some tests with two easily available solutions, dm-cache and bcache. In both cases, the Optane card was put in front of an external USB SATA disk and the cache layer set to writeback.
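As an illustration of the bcache case only (device names are placeholders, and the commands follow the generic bcache-tools workflow rather than the exact steps from the post):

```
# Format the backing device (slow disk) and the cache device (Optane) in one step
make-bcache -B /dev/sdb -C /dev/nvme0n1
# Once /dev/bcache0 appears, switch the cache policy from the default writethrough to writeback
echo writeback > /sys/block/bcache0/bcache/cache_mode
# A filesystem is then created on /dev/bcache0 and used as the data directory
```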
-
In my previous post Testing Samsung storage in tpcc-mysql benchmark of Percona Server I compared different Samsung devices. Most solid state drives (SSDs) use 4KiB as an internal page size, and the InnoDB default page size is 16KiB. I wondered how using a different innodb_page_size might affect the overall performance.
-
While working on the service architecture for one of our projects, I considered several SATA SSD options as the possible main storage for the data. The system will be quite write-intensive, so the main interest is the write performance at capacities close to full storage utilization.
-
Persistent disks are networked storage and generally have higher latency compared to physical disks or local SSDs. To reach the maximum performance limits of your persistent disks, you must issue enough I/O requests in parallel. To check if you're using a high enough queue depth to reach your required performance levels, see I/O queue depth.
-
NAND flash memory has been used widely in various mobile devices like smartphones, tablets and MP3 players. Furthermore, server systems started utilizing flash devices as their primary storage. Despite its broad use, flash memory has several limitations, such as the erase-before-write requirement, the need to write to erased blocks sequentially, and the limited number of write cycles per erase block.
-
These file systems directly access NAND flash memories while addressing all the chip-level issues such as wear-levelling and bad-block management. Unlike these systems, F2FS targets flash storage devices that come with a dedicated controller and firmware (FTL) to handle low-level tasks. Such flash storage devices are more commonplace.
-
F2FS was designed from scratch to optimize the performance and lifetime of flash devices with a generic block interface. It builds on the concept of the Log-Structured Filesystem (LFS), but also introduces a number of new design considerations:
-
Applications like databases (e.g., SQLite) frequently write small amounts of data to a file and issue an fsync to guarantee durability. A naive approach to supporting fsync would be to trigger checkpointing and recover data with the roll-back model. However, this approach leads to poor performance, as checkpointing involves writing all node and dentry blocks unrelated to the database file. F2FS implements an efficient roll-forward recovery mechanism to enhance fsync performance. The key idea is to write data blocks and their direct node blocks only, excluding other node or F2FS metadata blocks. In order to find the data blocks selectively after rolling back to the stable checkpoint, F2FS retains a special flag inside direct node blocks.
-
Experimental results showed that adaptive logging is critical to sustain performance at high storage utilization levels. The adaptive logging policy is also shown to effectively limit the performance degradation of F2FS due to fragmentation.
-
Some storage optimized instance types provide the ability to control processor C-states and P-states on Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the desired performance (in CPU frequency) from a core. For more information, see Processor state control for your EC2 instance.
-
SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy is to reserve space in the SSD instance storage so that the controller can more efficiently manage the space available for write operations. This is called over-provisioning. The SSD-based instance store volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller can use it for over-provisioning. This decreases the storage that you can use, but increases performance even if the disk is close to full capacity.
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md b/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md
deleted file mode 100644
index e3840ca48d1b69ba07dc66d38c6210aafb078595..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Garden Planner 3.5.20 Key - Crackingpatching Serial Key Keygen [PATCHED]
-
-
-
-
diff --git a/spaces/ruangguru/ds-chatbot-internal/README.md b/spaces/ruangguru/ds-chatbot-internal/README.md
deleted file mode 100644
index 75a2e362ac07d8c3a859a9e13c295b4f0d9187d1..0000000000000000000000000000000000000000
--- a/spaces/ruangguru/ds-chatbot-internal/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ds Chatbot Internal
-emoji: 🌖
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.28.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py b/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py
deleted file mode 100644
index 77170318de5bd6d225329a7e0b045b47a5c328b7..0000000000000000000000000000000000000000
--- a/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py
+++ /dev/null
@@ -1,233 +0,0 @@
-import argparse
-import os
-import matplotlib.pyplot as plt
-import torch
-import numpy as np
-import matplotlib
-from scipy.io.wavfile import write
-from os.path import dirname, abspath
-import sys
-
-import nltk
-
-nltk.download("punkt")
-
-sys.path.append(dirname(dirname(abspath(__file__))))
-matplotlib.use("Agg")
-
-from training.tacotron2_model import Tacotron2
-from training.clean_text import clean_text
-from training import DEFAULT_ALPHABET
-from synthesis.vocoders import Hifigan
-
-
-def load_model(model_path):
- """
- Loads the Tacotron2 model.
- Uses GPU if available, otherwise uses CPU.
-
- Parameters
- ----------
- model_path : str
- Path to tacotron2 model
-
- Returns
- -------
- Tacotron2
- Loaded tacotron2 model
- """
- if torch.cuda.is_available():
- model = Tacotron2().cuda()
- model.load_state_dict(torch.load(model_path)["state_dict"])
- _ = model.cuda().eval().half()
- else:
- model = Tacotron2()
- model.load_state_dict(torch.load(model_path, map_location=torch.device("cpu"))["state_dict"])
- return model
-
-
-def generate_graph(alignments, filepath, heading=""):
- """
- Generates synthesis alignment graph image.
-
- Parameters
- ----------
- alignments : list
- Numpy alignment data
- filepath : str
- Path to save image to
- heading : str (optional)
- Graph heading
- """
- data = alignments.float().data.cpu().numpy()[0].T
- plt.imshow(data, aspect="auto", origin="lower", interpolation="none")
- if heading:
- plt.title(heading)
- plt.savefig(filepath)
-
-
-def text_to_sequence(text, symbols):
- """
- Generates text sequence for audio file
-
- Parameters
- ----------
- text : str
- Text to synthesize
- symbols : list
- List of valid symbols
- """
- symbol_to_id = {s: i for i, s in enumerate(symbols)}
- sequence = np.array([[symbol_to_id[s] for s in text if s in symbol_to_id]])
- if torch.cuda.is_available():
- return torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long()
- else:
- return torch.autograd.Variable(torch.from_numpy(sequence)).cpu().long()
-
-
-def join_alignment_graphs(alignments):
- """
- Joins multiple alignment graphs.
-
- Parameters
- ----------
- alignments : list
- List of alignment Tensors
-
- Returns
- -------
- Tensor
- Combined alignment tensor
- """
- alignment_sizes = [a.size() for a in alignments]
- joined = torch.zeros((1, sum([a[1] for a in alignment_sizes]), sum([a[2] for a in alignment_sizes])))
- current_x = 0
- current_y = 0
- for alignment in alignments:
- joined[:, current_x : current_x + alignment.size()[1], current_y : current_y + alignment.size()[2]] = alignment
- current_x += alignment.size()[1]
- current_y += alignment.size()[2]
- return joined
-
-
-def synthesize(
- model,
- text,
- symbols=DEFAULT_ALPHABET,
- graph_path=None,
- audio_path=None,
- vocoder=None,
- silence_padding=0.15,
- sample_rate=22050,
- max_decoder_steps=1000,
- split_text=False,
-):
- """
- Synthesize text for a given model.
- Produces an alignment graph and/or audio file when the corresponding paths are given.
- Supports multi-line synthesis (lines separated by \n).
-
- Parameters
- ----------
- model : Tacotron2
- Tacotron2 model
- text : str/list
- Text to synthesize (or list of lines to synthesize)
- symbols : list
- List of symbols (default is English)
- graph_path : str (optional)
- Path to save alignment graph to
- audio_path : str (optional)
- Path to save audio file to
- vocoder : Object (optional)
- Vocoder model (required if generating audio)
- silence_padding : float (optional)
- Seconds of silence separating each clip in multi-line synthesis (default is 0.15)
- sample_rate : int (optional)
- Audio sample rate (default is 22050)
- max_decoder_steps : int (optional)
- Max decoder steps controls sequence length and memory usage during inference.
- Increasing this will use more memory but may allow for longer sentences. (default is 1000)
- split_text : bool (optional)
- Whether to use the split text tool to convert a block of text into multiple shorter sentences
- to synthesize (default is False)
-
- Raises
- -------
- AssertionError
- If audio_path is given without a vocoder
- """
- if audio_path:
- assert vocoder, "Missing vocoder"
-
- if not isinstance(text, list) and split_text:
- # Split text into multiple lines
- text = nltk.tokenize.sent_tokenize(text)
-
- if isinstance(text, list):
- # Multi-lines given
- text = [line.strip() for line in text if line.strip()]
- mels = []
- alignments = []
- for line in text:
- text = clean_text(line, symbols)
- sequence = text_to_sequence(text, symbols)
- _, mel_outputs_postnet, _, alignment = model.inference(sequence, max_decoder_steps)
- mels.append(mel_outputs_postnet)
- alignments.append(alignment)
-
- if graph_path:
- generate_graph(join_alignment_graphs(alignments), graph_path)
-
- if audio_path:
- silence = np.zeros(int(silence_padding * sample_rate)).astype("int16")
- audio_segments = []
- for i in range(len(mels)):
- audio_segments.append(vocoder.generate_audio(mels[i]))
- if i != len(mels) - 1:
- audio_segments.append(silence)
-
- audio = np.concatenate(audio_segments)
- write(audio_path, sample_rate, audio)
- else:
- # Single sentence
- text = clean_text(text.strip(), symbols)
- sequence = text_to_sequence(text, symbols)
- _, mel_outputs_postnet, _, alignment = model.inference(sequence, max_decoder_steps)
-
- if graph_path:
- generate_graph(alignment, graph_path)
-
- if audio_path:
- audio = vocoder.generate_audio(mel_outputs_postnet)
- write(audio_path, sample_rate, audio)
-
-
-if __name__ == "__main__":
- """Synthesize audio using model and vocoder"""
- parser = argparse.ArgumentParser(description="Synthesize audio using model and vocoder")
- parser.add_argument("-m", "--model_path", type=str, help="tacotron2 model path", required=True)
- parser.add_argument("-vm", "--vocoder_model_path", type=str, help="vocoder model path", required=True)
- parser.add_argument("-hc", "--hifigan_config_path", type=str, help="hifigan_config path", required=True)
- parser.add_argument("-t", "--text", type=str, help="text to synthesize", required=True)
- parser.add_argument("-g", "--graph_output_path", type=str, help="path to save alignment graph to", required=False)
- parser.add_argument("-a", "--audio_output_path", type=str, help="path to save output audio to", required=False)
- parser.add_argument("--silence_padding", type=float, help="Padding between sentences in seconds", default=0.15)
- parser.add_argument("--sample_rate", type=int, help="Audio sample rate", default=22050)
- args = parser.parse_args()
-
- assert os.path.isfile(args.model_path), "Model not found"
- assert os.path.isfile(args.vocoder_model_path), "vocoder model not found"
-
- model = load_model(args.model_path)
- vocoder = Hifigan(args.vocoder_model_path, args.hifigan_config_path)
-
- synthesize(
- model=model,
- text=args.text,
- graph_path=args.graph_output_path,
- audio_path=args.audio_output_path,
- vocoder=vocoder,
- silence_padding=args.silence_padding,
- sample_rate=args.sample_rate,
- )
diff --git a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py b/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py
deleted file mode 100644
index 8e961183802ae29d19b0df4da6d0da4aaba66bfb..0000000000000000000000000000000000000000
--- a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py
+++ /dev/null
@@ -1,610 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import numpy as np
-import PIL.Image
-import torch
-from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import *
-
-# https://github.com/mikonvergence/ControlNetInpaint
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> # !pip install opencv-python transformers accelerate
- >>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler
- >>> from diffusers.utils import load_image
- >>> import numpy as np
- >>> import torch
-
- >>> import cv2
- >>> from PIL import Image
- >>> # download an image
- >>> image = load_image(
- ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
- ... )
- >>> image = np.array(image)
- >>> mask_image = load_image(
- ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
- ... )
- >>> mask_image = np.array(mask_image)
- >>> # get canny image
- >>> canny_image = cv2.Canny(image, 100, 200)
- >>> canny_image = canny_image[:, :, None]
- >>> canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2)
- >>> canny_image = Image.fromarray(canny_image)
-
- >>> # load control net and stable diffusion v1-5
- >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
- >>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
- ... "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16
- ... )
-
- >>> # speed up diffusion process with faster scheduler and memory optimization
- >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
- >>> # remove following line if xformers is not installed
- >>> pipe.enable_xformers_memory_efficient_attention()
-
- >>> pipe.enable_model_cpu_offload()
-
- >>> # generate image
- >>> generator = torch.manual_seed(0)
- >>> image = pipe(
- ... "futuristic-looking doggo",
- ... num_inference_steps=20,
- ... generator=generator,
- ... image=image,
- ... control_image=canny_image,
- ... mask_image=mask_image
- ... ).images[0]
- ```
-"""
-
-
-def prepare_mask_and_masked_image(image, mask):
- """
- Prepares a pair (image, mask) to be consumed by the Stable Diffusion pipeline. This means that those inputs will be
- converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the
- ``image`` and ``1`` for the ``mask``.
- The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be
- binarized (``mask > 0.5``) and cast to ``torch.float32`` too.
- Args:
- image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint.
- It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width``
- ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``.
- mask (_type_): The mask to apply to the image, i.e. regions to inpaint.
- It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width``
- ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``.
- Raises:
- ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask
- should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions.
- TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not
- (or the other way around).
- Returns:
- tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4
- dimensions: ``batch x channels x height x width``.
- """
- if isinstance(image, torch.Tensor):
- if not isinstance(mask, torch.Tensor):
- raise TypeError(
- f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not"
- )
-
- # Batch single image
- if image.ndim == 3:
- assert (
- image.shape[0] == 3
- ), "Image outside a batch should be of shape (3, H, W)"
- image = image.unsqueeze(0)
-
- # Batch and add channel dim for single mask
- if mask.ndim == 2:
- mask = mask.unsqueeze(0).unsqueeze(0)
-
- # Batch single mask or add channel dim
- if mask.ndim == 3:
- # Single batched mask, no channel dim or single mask not batched but channel dim
- if mask.shape[0] == 1:
- mask = mask.unsqueeze(0)
-
- # Batched masks no channel dim
- else:
- mask = mask.unsqueeze(1)
-
- assert (
- image.ndim == 4 and mask.ndim == 4
- ), "Image and Mask must have 4 dimensions"
- assert (
- image.shape[-2:] == mask.shape[-2:]
- ), "Image and Mask must have the same spatial dimensions"
- assert (
- image.shape[0] == mask.shape[0]
- ), "Image and Mask must have the same batch size"
-
- # Check image is in [-1, 1]
- if image.min() < -1 or image.max() > 1:
- raise ValueError("Image should be in [-1, 1] range")
-
- # Check mask is in [0, 1]
- if mask.min() < 0 or mask.max() > 1:
- raise ValueError("Mask should be in [0, 1] range")
-
- # Binarize mask
- mask[mask < 0.5] = 0
- mask[mask >= 0.5] = 1
-
- # Image as float32
- image = image.to(dtype=torch.float32)
- elif isinstance(mask, torch.Tensor):
- raise TypeError(
- f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not"
- )
- else:
- # preprocess image
- if isinstance(image, (PIL.Image.Image, np.ndarray)):
- image = [image]
-
- if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
- image = [np.array(i.convert("RGB"))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- elif isinstance(image, list) and isinstance(image[0], np.ndarray):
- image = np.concatenate([i[None, :] for i in image], axis=0)
-
- image = image.transpose(0, 3, 1, 2)
- image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
-
- # preprocess mask
- if isinstance(mask, (PIL.Image.Image, np.ndarray)):
- mask = [mask]
-
- if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image):
- mask = np.concatenate(
- [np.array(m.convert("L"))[None, None, :] for m in mask], axis=0
- )
- mask = mask.astype(np.float32) / 255.0
- elif isinstance(mask, list) and isinstance(mask[0], np.ndarray):
- mask = np.concatenate([m[None, None, :] for m in mask], axis=0)
-
- mask[mask < 0.5] = 0
- mask[mask >= 0.5] = 1
- mask = torch.from_numpy(mask)
-
- masked_image = image * (mask < 0.5)
-
- return mask, masked_image
-
-
-class StableDiffusionControlNetInpaintPipeline(
- StableDiffusionControlNetPipeline
-):
- r"""
- Pipeline for text-guided image inpainting using Stable Diffusion with ControlNet guidance.
-
- This model inherits from [`StableDiffusionControlNetPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- controlnet ([`ControlNetModel`]):
- Provides additional conditioning to the unet during the denoising process
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def prepare_mask_latents(
- self,
- mask,
- masked_image,
- batch_size,
- height,
- width,
- dtype,
- device,
- generator,
- do_classifier_free_guidance,
- ):
- # resize the mask to latents shape as we concatenate the mask to the latents
- # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
- # and half precision
- mask = torch.nn.functional.interpolate(
- mask,
- size=(
- height // self.vae_scale_factor,
- width // self.vae_scale_factor,
- ),
- )
- mask = mask.to(device=device, dtype=dtype)
-
- masked_image = masked_image.to(device=device, dtype=dtype)
-
- # encode the mask image into latents space so we can concatenate it to the latents
- if isinstance(generator, list):
- masked_image_latents = [
- self.vae.encode(masked_image[i : i + 1]).latent_dist.sample(
- generator=generator[i]
- )
- for i in range(batch_size)
- ]
- masked_image_latents = torch.cat(masked_image_latents, dim=0)
- else:
- masked_image_latents = self.vae.encode(
- masked_image
- ).latent_dist.sample(generator=generator)
- masked_image_latents = (
- self.vae.config.scaling_factor * masked_image_latents
- )
-
- # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
- if mask.shape[0] < batch_size:
- if not batch_size % mask.shape[0] == 0:
- raise ValueError(
- "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to"
- f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number"
- " of masks that you pass is divisible by the total requested batch size."
- )
- mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1)
- if masked_image_latents.shape[0] < batch_size:
- if not batch_size % masked_image_latents.shape[0] == 0:
- raise ValueError(
- "The passed images and the required batch size don't match. Images are supposed to be duplicated"
- f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed."
- " Make sure the number of images that you pass is divisible by the total requested batch size."
- )
- masked_image_latents = masked_image_latents.repeat(
- batch_size // masked_image_latents.shape[0], 1, 1, 1
- )
-
- mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
- masked_image_latents = (
- torch.cat([masked_image_latents] * 2)
- if do_classifier_free_guidance
- else masked_image_latents
- )
-
- # aligning device to prevent device errors when concatenating it with the latent model input
- masked_image_latents = masked_image_latents.to(
- device=device, dtype=dtype
- )
- return mask, masked_image_latents
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- image: Union[torch.FloatTensor, PIL.Image.Image] = None,
- control_image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- ] = None,
- mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[
- Union[torch.Generator, List[torch.Generator]]
- ] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[
- Callable[[int, int, torch.FloatTensor], None]
- ] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- controlnet_conditioning_scale: float = 1.0,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- image (`PIL.Image.Image`):
- `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
- be masked out with `mask_image` and repainted according to `prompt`.
- control_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`):
- The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If
- the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
- also be accepted as an image. The control image is automatically resized to fit the output image.
- mask_image (`PIL.Image.Image`):
- `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
- repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
- to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
- instead of 3, so the expected shape would be `(B, H, W, 1)`.
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead.
- Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0):
- The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
- to the residual in the original unet.
- Examples:
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 0. Default height and width to unet
- height, width = self._default_height_width(height, width, control_image)
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- control_image,
- height,
- width,
- callback_steps,
- negative_prompt,
- prompt_embeds,
- negative_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
-
- # 4. Prepare image
- control_image = self.prepare_image(
- control_image,
- width,
- height,
- batch_size * num_images_per_prompt,
- num_images_per_prompt,
- device,
- self.controlnet.dtype,
- )
-
- if do_classifier_free_guidance:
- control_image = torch.cat([control_image] * 2)
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 6. Prepare latent variables
- num_channels_latents = self.controlnet.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # EXTRA: prepare mask latents
- mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
- mask, masked_image_latents = self.prepare_mask_latents(
- mask,
- masked_image,
- batch_size * num_images_per_prompt,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- do_classifier_free_guidance,
- )
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 8. Denoising loop
- num_warmup_steps = (
- len(timesteps) - num_inference_steps * self.scheduler.order
- )
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = (
- torch.cat([latents] * 2)
- if do_classifier_free_guidance
- else latents
- )
- latent_model_input = self.scheduler.scale_model_input(
- latent_model_input, t
- )
-
- down_block_res_samples, mid_block_res_sample = self.controlnet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- controlnet_cond=control_image,
- return_dict=False,
- )
-
- down_block_res_samples = [
- down_block_res_sample * controlnet_conditioning_scale
- for down_block_res_sample in down_block_res_samples
- ]
- mid_block_res_sample *= controlnet_conditioning_scale
-
- # predict the noise residual
- latent_model_input = torch.cat(
- [latent_model_input, mask, masked_image_latents], dim=1
- )
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- down_block_additional_residuals=down_block_res_samples,
- mid_block_additional_residual=mid_block_res_sample,
- ).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (
- noise_pred_text - noise_pred_uncond
- )
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(
- noise_pred, t, latents, **extra_step_kwargs
- ).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or (
- (i + 1) > num_warmup_steps
- and (i + 1) % self.scheduler.order == 0
- ):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # If we do sequential model offloading, let's offload unet and controlnet
- # manually for max memory savings
- if (
- hasattr(self, "final_offload_hook")
- and self.final_offload_hook is not None
- ):
- self.unet.to("cpu")
- self.controlnet.to("cpu")
- torch.cuda.empty_cache()
-
- if output_type == "latent":
- image = latents
- has_nsfw_concept = None
- elif output_type == "pil":
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(
- image, device, prompt_embeds.dtype
- )
-
- # 10. Convert to PIL
- image = self.numpy_to_pil(image)
- else:
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(
- image, device, prompt_embeds.dtype
- )
-
- # Offload last model to CPU
- if (
- hasattr(self, "final_offload_hook")
- and self.final_offload_hook is not None
- ):
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(
- images=image, nsfw_content_detected=has_nsfw_concept
- )
diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py
deleted file mode 100644
index 8168b96ca51e6e494c7c675c2f4a610e21b095d6..0000000000000000000000000000000000000000
--- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from typing import Tuple, List
-
-import cv2
-import numpy as np
-import supervision as sv
-import torch
-from PIL import Image
-from torchvision.ops import box_convert
-
-import groundingdino.datasets.transforms as T
-from groundingdino.models import build_model
-from groundingdino.util.misc import clean_state_dict
-from groundingdino.util.slconfig import SLConfig
-from groundingdino.util.utils import get_phrases_from_posmap
-
-
-def preprocess_caption(caption: str) -> str:
- result = caption.lower().strip()
- if result.endswith("."):
- return result
- return result + "."
-
-
-def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"):
- args = SLConfig.fromfile(model_config_path)
- args.device = device
- model = build_model(args)
- checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
- model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
- model.eval()
- return model
-
-
- def load_image(image_path: str) -> Tuple[np.ndarray, torch.Tensor]:
- transform = T.Compose(
- [
- T.RandomResize([800], max_size=1333),
- T.ToTensor(),
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
- ]
- )
- image_source = Image.open(image_path).convert("RGB")
- image = np.asarray(image_source)
- image_transformed, _ = transform(image_source, None)
- return image, image_transformed
-
-
-def predict(
- model,
- image: torch.Tensor,
- caption: str,
- box_threshold: float,
- text_threshold: float,
- device: str = "cuda"
-) -> Tuple[torch.Tensor, torch.Tensor, List[str]]:
- caption = preprocess_caption(caption=caption)
-
- model = model.to(device)
- image = image.to(device)
-
- with torch.no_grad():
- outputs = model(image[None], captions=[caption])
-
- prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256)
- prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4)
-
- mask = prediction_logits.max(dim=1)[0] > box_threshold
- logits = prediction_logits[mask] # logits.shape = (n, 256)
- boxes = prediction_boxes[mask] # boxes.shape = (n, 4)
-
- tokenizer = model.tokenizer
- tokenized = tokenizer(caption)
-
- phrases = [
- get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '')
- for logit
- in logits
- ]
-
- return boxes, logits.max(dim=1)[0], phrases
-
-
-def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray:
- h, w, _ = image_source.shape
- boxes = boxes * torch.Tensor([w, h, w, h])
- xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
- detections = sv.Detections(xyxy=xyxy)
-
- labels = [
- f"{phrase} {logit:.2f}"
- for phrase, logit
- in zip(phrases, logits)
- ]
-
- box_annotator = sv.BoxAnnotator()
- annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR)
- annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
- return annotated_frame
diff --git a/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py b/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py
deleted file mode 100644
index 2560e2a8b891773ff4c4f7a608b89b8ac4518194..0000000000000000000000000000000000000000
--- a/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import librosa.display as lbd
-import matplotlib.pyplot as plt
-import soundfile
-import torch
-
-from .InferenceArchitectures.InferenceFastSpeech2 import FastSpeech2
-from .InferenceArchitectures.InferenceHiFiGAN import HiFiGANGenerator
-from ..Preprocessing.ArticulatoryCombinedTextFrontend import ArticulatoryCombinedTextFrontend
-from ..Preprocessing.ArticulatoryCombinedTextFrontend import get_language_id
-
-
-class AnonFastSpeech2(torch.nn.Module):
-
- def __init__(self, device: str, path_to_hifigan_model: str, path_to_fastspeech_model: str):
- """
- Args:
- device: Device to run on. CPU is feasible, still faster than real-time, but a GPU is significantly faster.
- path_to_hifigan_model: Path to the vocoder model, including filename and suffix.
- path_to_fastspeech_model: Path to the synthesis model, including filename and suffix.
-
- """
- super().__init__()
- language = "en"
- self.device = device
- self.text2phone = ArticulatoryCombinedTextFrontend(language=language, add_silence_to_end=True)
- checkpoint = torch.load(path_to_fastspeech_model, map_location='cpu')
- self.phone2mel = FastSpeech2(weights=checkpoint["model"], lang_embs=None).to(torch.device(device))
- self.mel2wav = HiFiGANGenerator(path_to_weights=path_to_hifigan_model).to(torch.device(device))
- self.default_utterance_embedding = checkpoint["default_emb"].to(self.device)
- self.phone2mel.eval()
- self.mel2wav.eval()
- self.lang_id = get_language_id(language)
- self.to(torch.device(device))
-
- def forward(self, text, view=False, text_is_phonemes=False):
- """
- Args:
- text: The text that the TTS should convert to speech
- view: Boolean flag whether to produce and display a graphic showing the generated audio
- text_is_phonemes: Boolean flag whether the text parameter contains phonemes (True) or graphemes (False)
-
- Returns:
- 48kHz waveform as 1d tensor
-
- """
- with torch.no_grad():
- phones = self.text2phone.string_to_tensor(text, input_phonemes=text_is_phonemes).to(torch.device(self.device))
- mel, durations, pitch, energy = self.phone2mel(phones,
- return_duration_pitch_energy=True,
- utterance_embedding=self.default_utterance_embedding)
- mel = mel.transpose(0, 1)
- wave = self.mel2wav(mel)
- if view:
- from Utility.utils import cumsum_durations
- fig, ax = plt.subplots(nrows=2, ncols=1)
- ax[0].plot(wave.cpu().numpy())
- lbd.specshow(mel.cpu().numpy(),
- ax=ax[1],
- sr=16000,
- cmap='GnBu',
- y_axis='mel',
- x_axis=None,
- hop_length=256)
- ax[0].yaxis.set_visible(False)
- ax[1].yaxis.set_visible(False)
- duration_splits, label_positions = cumsum_durations(durations.cpu().numpy())
- ax[1].set_xticks(duration_splits, minor=True)
- ax[1].xaxis.grid(True, which='minor')
- ax[1].set_xticks(label_positions, minor=False)
- ax[1].set_xticklabels(self.text2phone.get_phone_string(text))
- ax[0].set_title(text)
- plt.subplots_adjust(left=0.05, bottom=0.1, right=0.95, top=.9, wspace=0.0, hspace=0.0)
- plt.show()
- return wave
-
- def anonymize_to_file(self, text: str, text_is_phonemes: bool, target_speaker_embedding: torch.tensor, path_to_result_file: str):
- """
- Args:
- text: The text that the TTS should convert to speech
- text_is_phonemes: Boolean flag whether the text parameter contains phonemes (True) or graphemes (False)
- target_speaker_embedding: The speaker embedding that should be used for the produced speech
- path_to_result_file: The path to the location where the resulting speech should be saved (including the filename and .wav suffix)
-
- """
-
- assert text.strip() != ""
- assert path_to_result_file.endswith(".wav")
-
- self.default_utterance_embedding = target_speaker_embedding.to(self.device)
- wav = self(text=text, text_is_phonemes=text_is_phonemes)
- soundfile.write(file=path_to_result_file, data=wav.cpu().numpy(), samplerate=48000)
diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md b/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md
deleted file mode 100644
index 803be3e61b848138c70524357a970683c60d8ad0..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md
+++ /dev/null
@@ -1,16 +0,0 @@
-
HD Online Player (Ek Villain 1080p Bluray Movie Downlo)
-
-The two have a daughter, Ranveer's first child with Aisha, born when his wife was only twenty one years old. She was not happy with her marriage, and was not prepared to move in with Guru. Aisha Singh "Aisha" is an Indian film and television actress who is a famous actor and model. However, they soon get married. Aisha Singh was born on 7 September, in New Delhi, India.
-
-Watch Ek Villain - Hindi Thriller full movie on Disney+ Hotstar now. Aisha's father is a police officer. Guru gets injured in an encounter in which an armed squad consisting of both Rajasthan and Delhi police surrounded Guru's home in the early hours of the day.. An Indian television actress and producer, who is also a trained classical dancer, Miss Aisha Singh is the first female Indian artiste to enter and win the Eurovision Song Contest as a music director of the song Gagan mein and win the title of Miss Universe.
-
-
Guru, a goon, marries Aisha and decides to make a fresh start. Watch Ek Villain - Hindi Thriller full movie on Disney+ Hotstar now.
-
-
-
-
diff --git a/spaces/seanshahkarami/clip-explorer/README.md b/spaces/seanshahkarami/clip-explorer/README.md
deleted file mode 100644
index df64a9d6f528002d23c2b195d9701b4b47ca1741..0000000000000000000000000000000000000000
--- a/spaces/seanshahkarami/clip-explorer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Clip Explorer
-emoji: 🏁
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md b/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md
deleted file mode 100644
index c5e908d5dc5aff663d90d6669e78c0ffec765ce4..0000000000000000000000000000000000000000
--- a/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Wav2Vec2 Large XLSR 53 Dhivehi
-emoji: ⚡
-colorFrom: gray
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h
deleted file mode 100644
index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000
--- a/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h
+++ /dev/null
@@ -1,25 +0,0 @@
-#pragma once
-
-#include <type_traits>
-
-#include "libipc/def.h"
-#include "libipc/prod_cons.h"
-
-#include "libipc/circ/elem_array.h"
-
-namespace ipc {
-namespace policy {
-
-template <template <typename, std::size_t...> class Elems, typename Flag>
-struct choose;
-
-template <typename Flag>
-struct choose<circ::elem_array, Flag> {
- using flag_t = Flag;
-
- template <std::size_t DataSize, std::size_t AlignSize>
- using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
-};
-
-} // namespace policy
-} // namespace ipc
diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
deleted file mode 100644
index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-from torchvision.ops.boxes import nms
-from transformers import BertConfig, BertModel, BertPreTrainedModel
-from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions
-
-
-class BertModelWarper(nn.Module):
- def __init__(self, bert_model):
- super().__init__()
- # self.bert = bert_model
-
- self.config = bert_model.config
- self.embeddings = bert_model.embeddings
- self.encoder = bert_model.encoder
- self.pooler = bert_model.pooler
-
- self.get_extended_attention_mask = bert_model.get_extended_attention_mask
- self.invert_attention_mask = bert_model.invert_attention_mask
- self.get_head_mask = bert_model.get_head_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = (
- output_attentions if output_attentions is not None else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if self.config.is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- # past_key_values_length
- past_key_values_length = (
- past_key_values[0][0].shape[2] if past_key_values is not None else 0
- )
-
- if attention_mask is None:
- attention_mask = torch.ones(
- ((batch_size, seq_length + past_key_values_length)), device=device
- )
- if token_type_ids is None:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(
- attention_mask, input_shape, device
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.config.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- token_type_ids=token_type_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
-class TextEncoderShell(nn.Module):
- def __init__(self, text_encoder):
- super().__init__()
- self.text_encoder = text_encoder
- self.config = self.text_encoder.config
-
- def forward(self, **kw):
- # feed into text encoder
- return self.text_encoder(**kw)
-
-
-def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer):
- """Generate attention mask between each pair of special tokens
- Args:
- input_ids (torch.Tensor): input ids. Shape: [bs, num_token]
- special_tokens_mask (list): special tokens mask.
- Returns:
- torch.Tensor: attention mask between each special tokens.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
-
- previous_col = col
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long)
-
-
-def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer):
- """Generate attention mask between each pair of special tokens
- Args:
- input_ids (torch.Tensor): input ids. Shape: [bs, num_token]
- special_tokens_mask (list): special tokens mask.
- Returns:
- torch.Tensor: attention mask between each special tokens.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- cate_to_token_mask_list = [[] for _ in range(bs)]
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
- c2t_maski = torch.zeros((num_token), device=input_ids.device).bool()
- c2t_maski[previous_col + 1 : col] = True
- cate_to_token_mask_list[row].append(c2t_maski)
- previous_col = col
-
- cate_to_token_mask_list = [
- torch.stack(cate_to_token_mask_listi, dim=0)
- for cate_to_token_mask_listi in cate_to_token_mask_list
- ]
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list
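The two mask builders above produce a block-diagonal self-attention mask over the text prompt: ordinary tokens may only attend to the other tokens of their own phrase (the span between two consecutive special tokens), and position ids restart at zero inside each span. A standalone sketch of that layout for a toy prompt follows; the token ids and special-token list are illustrative, not taken from this repository.

```python
import torch

# Hypothetical tokenized prompt "[CLS] cat . dog . [SEP]" -> special tokens at columns 0, 2, 4, 5.
input_ids = torch.tensor([[101, 4937, 1012, 3899, 1012, 102]])
special_ids = [101, 102, 1012]

special_mask = torch.zeros_like(input_ids, dtype=torch.bool)
for sid in special_ids:
    special_mask |= input_ids == sid

bs, n = input_ids.shape
attention_mask = torch.eye(n, dtype=torch.bool).unsqueeze(0).repeat(bs, 1, 1)
position_ids = torch.zeros((bs, n), dtype=torch.long)
prev = 0
for row, col in torch.nonzero(special_mask).tolist():
    if col not in (0, n - 1):
        # tokens of one phrase (prev+1 .. col) attend only to each other
        attention_mask[row, prev + 1 : col + 1, prev + 1 : col + 1] = True
        position_ids[row, prev + 1 : col + 1] = torch.arange(col - prev)
    prev = col

print(attention_mask[0].int())  # block-diagonal: one block per phrase, specials attend to themselves
print(position_ids[0])          # tensor([0, 0, 1, 0, 1, 0]) -> positions restart inside each phrase
```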
diff --git a/spaces/shivammehta25/Diff-TTSG/pymo/__init__.py b/spaces/shivammehta25/Diff-TTSG/pymo/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py
deleted file mode 100644
index 734154f9ed9447d585eae7df6886acb136f8a3cf..0000000000000000000000000000000000000000
--- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py
+++ /dev/null
@@ -1,377 +0,0 @@
-import math
-import torch
-from torch import nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn import functional as F
-from torch.nn.modules.utils import _pair, _single
-
-try:
- from . import deform_conv_ext
-except ImportError:
- import os
- BASICSR_JIT = os.getenv('BASICSR_JIT')
- if BASICSR_JIT == 'True':
- from torch.utils.cpp_extension import load
- module_path = os.path.dirname(__file__)
- deform_conv_ext = load(
- 'deform_conv',
- sources=[
- os.path.join(module_path, 'src', 'deform_conv_ext.cpp'),
- os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'),
- os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'),
- ],
- )
-
-
-class DeformConvFunction(Function):
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- im2col_step=64):
- if input is not None and input.dim() != 4:
- raise ValueError(f'Expected 4D tensor as input, got {input.dim()}' 'D tensor instead.')
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.im2col_step = im2col_step
-
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride))
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- if not input.is_cuda:
- raise NotImplementedError
- else:
- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
- deform_conv_ext.deform_conv_forward(input, weight,
- offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
- ctx.deformable_groups, cur_im2col_step)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- if not grad_output.is_cuda:
- raise NotImplementedError
- else:
- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
-
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input,
- grad_offset, weight, ctx.bufs_[0], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
- ctx.deformable_groups, cur_im2col_step)
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight,
- ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0],
- ctx.padding[1], ctx.padding[0], ctx.dilation[1],
- ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1,
- cur_im2col_step)
-
- return (grad_input, grad_offset, grad_weight, None, None, None, None, None)
-
- @staticmethod
- def _output_size(input, weight, padding, dilation, stride):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = padding[d]
- kernel = dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError('convolution input is too small (output would be ' f'{"x".join(map(str, output_size))})')
- return output_size
-
-
-class ModulatedDeformConvFunction(Function):
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- mask,
- weight,
- bias=None,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1):
- ctx.stride = stride
- ctx.padding = padding
- ctx.dilation = dilation
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.with_bias = bias is not None
- if not ctx.with_bias:
- bias = input.new_empty(1) # fake tensor
- if not input.is_cuda:
- raise NotImplementedError
- if weight.requires_grad or mask.requires_grad or offset.requires_grad \
- or input.requires_grad:
- ctx.save_for_backward(input, offset, mask, weight, bias)
- output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight))
- ctx._bufs = [input.new_empty(0), input.new_empty(0)]
- deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output,
- ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride,
- ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,
- ctx.groups, ctx.deformable_groups, ctx.with_bias)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- if not grad_output.is_cuda:
- raise NotImplementedError
- input, offset, mask, weight, bias = ctx.saved_tensors
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- grad_mask = torch.zeros_like(mask)
- grad_weight = torch.zeros_like(weight)
- grad_bias = torch.zeros_like(bias)
- deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1],
- grad_input, grad_weight, grad_bias, grad_offset, grad_mask,
- grad_output, weight.shape[2], weight.shape[3], ctx.stride,
- ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,
- ctx.groups, ctx.deformable_groups, ctx.with_bias)
- if not ctx.with_bias:
- grad_bias = None
-
- return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None)
-
- @staticmethod
- def _infer_shape(ctx, input, weight):
- n = input.size(0)
- channels_out = weight.size(0)
- height, width = input.shape[2:4]
- kernel_h, kernel_w = weight.shape[2:4]
- height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1
- width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1
- return n, channels_out, height_out, width_out
-
-
-deform_conv = DeformConvFunction.apply
-modulated_deform_conv = ModulatedDeformConvFunction.apply
-
-
-class DeformConv(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=False):
- super(DeformConv, self).__init__()
-
- assert not bias
- assert in_channels % groups == 0, \
- f'in_channels {in_channels} is not divisible by groups {groups}'
- assert out_channels % groups == 0, \
- f'out_channels {out_channels} is not divisible ' \
- f'by groups {groups}'
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deformable_groups = deformable_groups
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- n = self.in_channels
- for k in self.kernel_size:
- n *= k
- stdv = 1. / math.sqrt(n)
- self.weight.data.uniform_(-stdv, stdv)
-
- def forward(self, x, offset):
- # To fix an assert error in deform_conv_cuda.cpp:128
- # input image is smaller than kernel
- input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1])
- if input_pad:
- pad_h = max(self.kernel_size[0] - x.size(2), 0)
- pad_w = max(self.kernel_size[1] - x.size(3), 0)
- x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,
- self.deformable_groups)
- if input_pad:
- out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous()
- return out
-
-
-class DeformConvPack(DeformConv):
- """A Deformable Conv Encapsulation that acts as normal Conv layers.
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(DeformConvPack, self).__init__(*args, **kwargs)
-
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_offset()
-
- def init_offset(self):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- offset = self.conv_offset(x)
- return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,
- self.deformable_groups)
-
-
-class ModulatedDeformConv(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=True):
- super(ModulatedDeformConv, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
- self.groups = groups
- self.deformable_groups = deformable_groups
- self.with_bias = bias
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.register_parameter('bias', None)
- self.init_weights()
-
- def init_weights(self):
- n = self.in_channels
- for k in self.kernel_size:
- n *= k
- stdv = 1. / math.sqrt(n)
- self.weight.data.uniform_(-stdv, stdv)
- if self.bias is not None:
- self.bias.data.zero_()
-
- def forward(self, x, offset, mask):
- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,
- self.groups, self.deformable_groups)
-
-
-class ModulatedDeformConvPack(ModulatedDeformConv):
- """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers.
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(ModulatedDeformConvPack, self).__init__(*args, **kwargs)
-
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_weights()
-
- def init_weights(self):
- super(ModulatedDeformConvPack, self).init_weights()
- if hasattr(self, 'conv_offset'):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- out = self.conv_offset(x)
- o1, o2, mask = torch.chunk(out, 3, dim=1)
- offset = torch.cat((o1, o2), dim=1)
- mask = torch.sigmoid(mask)
- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,
- self.groups, self.deformable_groups)
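Both *Pack variants above predict their own sampling geometry with a plain nn.Conv2d: DeformConvPack needs two offset channels (dx, dy) per kernel tap and deformable group, while ModulatedDeformConvPack adds a third per-tap channel for the sigmoid-squashed modulation mask. A quick arithmetic sketch of that channel budget; the kernel size and group count below are illustrative values, not defaults from this file.

```python
# Channel budget of the internal conv_offset layers, per the 2x / 3x factors used above.
kernel_h = kernel_w = 3
deformable_groups = 1

offset_channels = deformable_groups * 2 * kernel_h * kernel_w        # DeformConvPack: (dx, dy) per tap -> 18
offset_mask_channels = deformable_groups * 3 * kernel_h * kernel_w   # ModulatedDeformConvPack: + mask per tap -> 27
print(offset_channels, offset_mask_channels)
```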
diff --git a/spaces/silencewing/server/youyou/.history/math_20230613232841.html b/spaces/silencewing/server/youyou/.history/math_20230613232841.html
deleted file mode 100644
index 8689433893a8cd567aee4a575ecf3952cfa62f50..0000000000000000000000000000000000000000
--- a/spaces/silencewing/server/youyou/.history/math_20230613232841.html
+++ /dev/null
@@ -1,235 +0,0 @@
- Document
-
-STYLE_CONFIG_OLD = f"""
-
-"""
-
-with open('./style.css') as f:
- STYLE_CONFIG_NEW = f.read()
-STYLE_CONFIG = STYLE_CONFIG_OLD + '<style>{}</style>'.format(STYLE_CONFIG_NEW)
-
-LABEL_COLORS = {'problem':'#0C8888',
- 'test':'#FF33C1',
- 'treatment':'#3196D4',
- 'multi':'#ccfff5',
- 'multi-tissue_structure':'#8dd8b4',
- 'cell':'#ffe6cc',
- 'organism':'#ffddcc',
- 'gene_or_gene_product':'#fff0b3',
- 'organ':'#e6e600',
- 'simple_chemical':'#ffd699',
-
- 'per':'#0C8888', 'pers':'#0C8888','person':'#0C8888',
- 'org':'#FF33C1',
- 'misc': '#3196D4', 'mis': '#3196D4',
- 'loc':'#5B00A3', 'location':'#5B00A3',
-
-
- 'drug':'#33BBFF',
- 'diagnosis':'#b5a1c9',
- 'maybe':'#FFB5C5',
- 'lab_result':'#3abd80',
- 'negated':'#CD3700',
- 'name':'#C0FF3E',
- 'lab_name':'#698B22',
- 'modifier':'#8B475D',
- 'symptom_name':'#CDB7B5',
- 'section_name':'#8B7D7B',
- 'procedure_name':'#48D1CC',
- 'grading':"#8c61e8",
- 'size':"#746b87",
- 'organism_substance':'#ffaa80',
- 'gender':'#ffacb7',
- 'age':'#ffe0ac',
- 'date': '#a6b1e1'
- }
diff --git a/spaces/sparswan/SP-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py b/spaces/sparswan/SP-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py
deleted file mode 100644
index 0f4298365bc4f58d285202fb9442e12805d2db95..0000000000000000000000000000000000000000
--- a/spaces/sparswan/SP-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import streamlit as st
-import gradio as gr
-import IPython
-import streamlit.components.v1 as components
-from IPython.display import IFrame
-
-src='' # URL parameter to change the iframe url
-def SetIframeURL(option_selected):
- if (option_selected=='Collager'):
- src='https://www.artbreeder.com/'
- if (option_selected=='Midjourney'):
- src='https://www.midjourney.com/'
- if (option_selected=='DreamStudio'):
- src='https://beta.dreamstudio.ai/'
- if (option_selected=='NightCafe'):
- src='https://creator.nightcafe.studio/'
- if (option_selected=='RunwayML'):
- src='https://app.runwayml.com/'
- if (option_selected=='ArtFromTextandImages'):
- src='https://huggingface.co/spaces/awacke1/Art-from-Text-and-Images'
- if (option_selected=='Boomy'):
- src='https://boomy.com/'
-
- width = st.sidebar.slider("Width", 200, 1500, 800, 100)
- height = st.sidebar.slider("Height", 200, 1500, 900, 100)
- st.components.v1.iframe(src, width, height, scrolling=True)
-
-try:
- options = ['Midjourney', 'RunwayML', 'Boomy']
- query_params = st.experimental_get_query_params()
- query_option = query_params['option'][0] #throws an exception when visiting http://host:port
- option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option))
- if option_selected:
- st.experimental_set_query_params(option=option_selected)
- SetIframeURL(option_selected)
-except:
- options = ['Midjourney', 'RunwayML', 'Boomy']
- st.experimental_set_query_params(option=options[1]) # defaults to 1
- query_params = st.experimental_get_query_params()
- query_option = query_params['option'][0]
- option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option))
- if option_selected:
- st.experimental_set_query_params(option=option_selected)
- SetIframeURL(option_selected)
\ No newline at end of file
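The app above keeps the selected tool in the page URL by reading and re-writing a query parameter, falling back to a default when the parameter is absent. A minimal sketch of the same round trip, using a dict lookup with a default instead of the bare except (the option list is illustrative; the st.experimental_* calls are the ones already used above):

```python
# Minimal sketch of the query-parameter round trip used above.
import streamlit as st

options = ['Midjourney', 'RunwayML', 'Boomy']
params = st.experimental_get_query_params()           # e.g. {} or {'option': ['Boomy']}
current = params.get('option', [options[1]])[0]        # fall back to a default instead of raising
choice = st.sidebar.selectbox('Pick option', options, index=options.index(current))
st.experimental_set_query_params(option=choice)        # keep the URL shareable
```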
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py
deleted file mode 100644
index 632a69e9f4bd98d33abb689c15557c818d0e35ea..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py
+++ /dev/null
@@ -1,210 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import gc
-import os
-import os.path as osp
-import random
-import numpy as np
-import tqdm
-import torch
-
-from collections import namedtuple
-
-import faiss
-
-import fairseq
-import soundfile as sf
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="compute kmeans codebook from kaldi-computed feats"
- )
- # fmt: off
- parser.add_argument('data', help='location of tsv files')
- parser.add_argument('--save-dir', help='where to save the output', required=True)
- parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True)
- parser.add_argument('--sample-pct', '-r', type=float, help='percentage of timesteps to sample', default=0)
- parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14)
- parser.add_argument('--faiss-specs', '-f', type=str,
- help='faiss index specs; separated by space '
- 'format is: PCAx_NORM_CLUSx_SPHERICAL -> '
- 'PCAx if exists first apply PCA '
- 'NORM if exists, normalize the vector by L2 norm '
- 'CLUSx must exist, cluster to x clusters '
- 'SPHERICAL if exists, apply spherical kmeans',
- default='l2')
- # fmt: on
-
- return parser
-
-
-faiss_spec = namedtuple("faiss_spec", ["pca", "norm", "n_clus", "sphere", "spec_str"])
-
-
-def parse_faiss_specs(specs_str):
- specs = []
- for ss in specs_str.split():
- comps = ss.split("_")
- pca = 0
- norm = False
- n_clus = 0
- sphere = False
- for c in comps:
- if c.startswith("PCA"):
- pca = int(c[3:])
- elif c == "NORM":
- norm = True
- elif c.startswith("CLUS"):
- n_clus = int(c[4:])
- elif c == "SPHERICAL":
- sphere = True
- assert n_clus > 0
- specs.append(
- faiss_spec(pca=pca, norm=norm, n_clus=n_clus, sphere=sphere, spec_str=ss)
- )
- return specs
-
-
-class Wav2VecFeatureReader(object):
- def __init__(self, cp_file, layer):
- state = fairseq.checkpoint_utils.load_checkpoint_to_cpu(cp_file)
-
- self.layer = layer
-
- if "cfg" in state:
- w2v_args = state["cfg"]
- task = fairseq.tasks.setup_task(w2v_args.task)
- model = task.build_model(w2v_args.model)
- else:
- w2v_args = state["args"]
- task = fairseq.tasks.setup_task(w2v_args)
- model = task.build_model(w2v_args)
- model.load_state_dict(state["model"], strict=True)
- model.eval()
- model.cuda()
- self.model = model
-
- def read_audio(self, fname):
- """Load an audio file and return PCM along with the sample rate"""
- wav, sr = sf.read(fname)
- assert sr == 16e3
-
- return wav
-
- def get_feats(self, loc):
- x = self.read_audio(loc)
- with torch.no_grad():
- source = torch.from_numpy(x).view(1, -1).float().cuda()
- res = self.model(
- source=source, mask=False, features_only=True, layer=self.layer
- )
- return res["layer_results"][self.layer][0].squeeze(1)
-
-
-def get_iterator(args):
- with open(args.data, "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0]
-
- if getattr(args, "sample_pct", 0) > 0:
- files = random.sample(files, int(args.sample_pct * len(files)))
- num = len(files)
- reader = Wav2VecFeatureReader(args.checkpoint, args.layer)
-
- def iterate():
- for fname in files:
- feats = reader.get_feats(fname)
- yield feats.cpu().numpy()
-
- return iterate, num
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- faiss_specs = parse_faiss_specs(args.faiss_specs)
- print("Faiss Specs:", faiss_specs)
-
- feat_path = osp.join(args.save_dir, "features")
- if osp.exists(feat_path + ".npy"):
- feats = np.load(feat_path + ".npy")
- else:
- generator, num = get_iterator(args)
- iterator = generator()
-
- feats = []
- for f in tqdm.tqdm(iterator, total=num):
- feats.append(f)
-
- del iterator
- del generator
-
- feats = np.concatenate(feats)
-
- print(feats.shape)
-
- os.makedirs(args.save_dir, exist_ok=True)
- # np.save(feat_path, feats)
-
- gc.collect()
- torch.cuda.empty_cache()
-
- reload = False
- for spec in faiss_specs:
- print("Processing spec", spec)
-
- if reload:
- print("Reloading...")
- del feats
- gc.collect()
- feats = np.load(feat_path + ".npy")
-
- save_path = osp.join(args.save_dir, spec.spec_str)
- os.makedirs(save_path, exist_ok=True)
- d = feats.shape[-1]
- x = feats
- if spec.pca > 0:
- print("Computing PCA")
- pca = faiss.PCAMatrix(d, spec.pca)
- pca.train(x)
- d = spec.pca
- b = faiss.vector_to_array(pca.b)
- A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in)
- np.save(osp.join(save_path, "pca_A"), A.T)
- np.save(osp.join(save_path, "pca_b"), b)
- print("Applying PCA")
- x = pca.apply_py(x)
-
- if spec.norm:
- reload = spec.pca <= 0
- print("Normalizing")
- faiss.normalize_L2(x)
-
- print("Computing kmeans")
- kmeans = faiss.Kmeans(
- d,
- spec.n_clus,
- niter=50,
- verbose=True,
- spherical=spec.sphere,
- max_points_per_centroid=feats.shape[0],
- gpu=True,
- nredo=3,
- )
- kmeans.train(x)
- np.save(osp.join(save_path, "centroids"), kmeans.centroids)
- del kmeans
- del x
- gc.collect()
-
-
-if __name__ == "__main__":
- main()
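The --faiss-specs grammar documented in get_parser is an underscore-separated token list, so a spec such as PCA512_NORM_CLUS64_SPHERICAL means: reduce to 512 dimensions with PCA, L2-normalize, then run spherical k-means with 64 centroids. A minimal sketch mirroring parse_faiss_specs; the example spec string is illustrative.

```python
spec_str = "PCA512_NORM_CLUS64_SPHERICAL"

pca, norm, n_clus, sphere = 0, False, 0, False
for comp in spec_str.split("_"):
    if comp.startswith("PCA"):
        pca = int(comp[3:])        # apply PCA down to 512 dims first
    elif comp == "NORM":
        norm = True                # then L2-normalize the vectors
    elif comp.startswith("CLUS"):
        n_clus = int(comp[4:])     # k-means with 64 centroids (the only required token)
    elif comp == "SPHERICAL":
        sphere = True              # use spherical k-means

print(pca, norm, n_clus, sphere)   # -> 512 True 64 True
```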
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/wav2vec/wav2vec2.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/wav2vec/wav2vec2.py
deleted file mode 100644
index 714fd3ab50443b8d15715b1cf5abd4eb517298c4..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/wav2vec/wav2vec2.py
+++ /dev/null
@@ -1,1016 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-from typing import List, Tuple
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.data.data_utils import compute_mask_indices
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- Fp32GroupNorm,
- Fp32LayerNorm,
- GradMultiply,
- GumbelVectorQuantizer,
- LayerNorm,
- MultiheadAttention,
- SamePad,
- TransposeLast,
-)
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-from fairseq.utils import buffered_arange, index_put, is_xla_tensor
-
-
-EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"])
-MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"])
-
-
-@dataclass
-class Wav2Vec2Config(FairseqDataclass):
- extractor_mode: EXTRACTOR_MODE_CHOICES = field(
- default="default",
- metadata={
- "help": "mode for feature extractor. default has a single group norm with d "
- "groups in the first conv block, whereas layer_norm has layer norms in "
- "every block (meant to use with normalize=True)"
- },
- )
- encoder_layers: int = field(
- default=12, metadata={"help": "num encoder layers in the transformer"}
- )
- encoder_embed_dim: int = field(
- default=768, metadata={"help": "encoder embedding dimension"}
- )
- encoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "encoder embedding dimension for FFN"}
- )
- encoder_attention_heads: int = field(
- default=12, metadata={"help": "num encoder attention heads"}
- )
- activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field(
- default="gelu", metadata={"help": "activation function to use"}
- )
-
- # dropouts
- dropout: float = field(
- default=0.1, metadata={"help": "dropout probability for the transformer"}
- )
- attention_dropout: float = field(
- default=0.1, metadata={"help": "dropout probability for attention weights"}
- )
- activation_dropout: float = field(
- default=0.0, metadata={"help": "dropout probability after activation in FFN"}
- )
- encoder_layerdrop: float = field(
- default=0.0, metadata={"help": "probability of dropping a transformer layer"}
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- dropout_features: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the features (after feat extr)"},
- )
-
- final_dim: int = field(
- default=0,
- metadata={
- "help": "project final representations and targets to this many dimensions. "
- "set to encoder_embed_dim if <= 0"
- },
- )
- layer_norm_first: bool = field(
- default=False, metadata={"help": "apply layernorm first in the transformer"}
- )
- conv_feature_layers: str = field(
- default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]",
- metadata={
- "help": "string describing convolutional feature extraction layers in form of a python list that contains "
- "[(dim, kernel_size, stride), ...]"
- },
- )
- conv_bias: bool = field(
- default=False, metadata={"help": "include bias in conv encoder"}
- )
- logit_temp: float = field(
- default=0.1, metadata={"help": "temperature to divide logits by"}
- )
- quantize_targets: bool = field(
- default=False, metadata={"help": "use quantized targets"}
- )
- quantize_input: bool = field(
- default=False, metadata={"help": "use quantized inputs"}
- )
- same_quantizer: bool = field(
- default=False, metadata={"help": "use same quantizer for inputs and targets"}
- )
- target_glu: bool = field(
- default=False, metadata={"help": "adds projection + glu to targets"}
- )
- feature_grad_mult: float = field(
- default=1.0, metadata={"help": "multiply feature extractor var grads by this"}
- )
- quantizer_depth: int = field(
- default=1,
- metadata={"help": "number of quantizer layers"},
- )
- quantizer_factor: int = field(
- default=3,
- metadata={
- "help": "dimensionality increase for inner quantizer layers (if depth > 1)"
- },
- )
- latent_vars: int = field(
- default=320,
- metadata={"help": "number of latent variables V in each group of the codebook"},
- )
- latent_groups: int = field(
- default=2,
- metadata={"help": "number of groups G of latent variables in the codebook"},
- )
- latent_dim: int = field(
- default=0,
- metadata={
- "help": "if > 0, uses this dimensionality for latent variables. "
- "otherwise uses final_dim / latent_groups"
- },
- )
-
- # masking
- mask_length: int = field(default=10, metadata={"help": "mask length"})
- mask_prob: float = field(
- default=0.65, metadata={"help": "probability of replacing a token with mask"}
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose mask length"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
- mask_min_space: int = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10, metadata={"help": "length of the mask for features (channels)"}
- )
- mask_channel_prob: float = field(
- default=0.0, metadata={"help": "probability of replacing a feature with 0"}
- )
- mask_channel_before: bool = False
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False, metadata={"help": "whether to allow channel masks to overlap"}
- )
- mask_channel_min_space: int = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
-
- # negative selection
- num_negatives: int = field(
- default=100,
- metadata={"help": "number of negative examples from the same sample"},
- )
- negatives_from_everywhere: bool = field(
- default=False,
- metadata={"help": "sample negatives from everywhere, not just masked states"},
- )
- cross_sample_negatives: int = field(
- default=0, metadata={"help": "number of negative examples from the any sample"}
- )
- codebook_negatives: int = field(
- default=0, metadata={"help": "number of negative examples codebook"}
- )
-
- # positional embeddings
- conv_pos: int = field(
- default=128,
- metadata={"help": "number of filters for convolutional positional embeddings"},
- )
- conv_pos_groups: int = field(
- default=16,
- metadata={"help": "number of groups for convolutional positional embedding"},
- )
-
- latent_temp: Tuple[float, float, float] = field(
- default=(2, 0.5, 0.999995),
- metadata={
- "help": "temperature for latent variable sampling. "
- "can be tuple of 3 values (start, end, decay)"
- },
- )
-
-
-@register_model("wav2vec2", dataclass=Wav2Vec2Config)
-class Wav2Vec2Model(BaseFairseqModel):
- def __init__(self, cfg: Wav2Vec2Config):
- super().__init__()
- self.cfg = cfg
-
- feature_enc_layers = eval(cfg.conv_feature_layers)
- self.embed = feature_enc_layers[-1][0]
-
- self.feature_extractor = ConvFeatureExtractionModel(
- conv_layers=feature_enc_layers,
- dropout=0.0,
- mode=cfg.extractor_mode,
- conv_bias=cfg.conv_bias,
- )
-
- self.post_extract_proj = (
- nn.Linear(self.embed, cfg.encoder_embed_dim)
- if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input
- else None
- )
-
- self.mask_prob = cfg.mask_prob
- self.mask_selection = cfg.mask_selection
- self.mask_other = cfg.mask_other
- self.mask_length = cfg.mask_length
- self.no_mask_overlap = cfg.no_mask_overlap
- self.mask_min_space = cfg.mask_min_space
-
- self.mask_channel_prob = cfg.mask_channel_prob
- self.mask_channel_before = cfg.mask_channel_before
- self.mask_channel_selection = cfg.mask_channel_selection
- self.mask_channel_other = cfg.mask_channel_other
- self.mask_channel_length = cfg.mask_channel_length
- self.no_mask_channel_overlap = cfg.no_mask_channel_overlap
- self.mask_channel_min_space = cfg.mask_channel_min_space
-
- self.dropout_input = nn.Dropout(cfg.dropout_input)
- self.dropout_features = nn.Dropout(cfg.dropout_features)
-
- self.feature_grad_mult = cfg.feature_grad_mult
-
- self.quantizer = None
- self.input_quantizer = None
-
- self.n_negatives = cfg.num_negatives
- self.cross_sample_negatives = cfg.cross_sample_negatives
- self.codebook_negatives = cfg.codebook_negatives
- self.negatives_from_everywhere = cfg.negatives_from_everywhere
-
- self.logit_temp = cfg.logit_temp
-
- final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim
-
- if cfg.quantize_targets:
- vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim
- self.quantizer = GumbelVectorQuantizer(
- dim=self.embed,
- num_vars=cfg.latent_vars,
- temp=cfg.latent_temp,
- groups=cfg.latent_groups,
- combine_groups=False,
- vq_dim=vq_dim,
- time_first=True,
- weight_proj_depth=cfg.quantizer_depth,
- weight_proj_factor=cfg.quantizer_factor,
- )
- self.project_q = nn.Linear(vq_dim, final_dim)
- else:
- self.project_q = nn.Linear(self.embed, final_dim)
-
- if cfg.quantize_input:
- if cfg.same_quantizer and self.quantizer is not None:
- vq_dim = final_dim
- self.input_quantizer = self.quantizer
- else:
- vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim
- self.input_quantizer = GumbelVectorQuantizer(
- dim=self.embed,
- num_vars=cfg.latent_vars,
- temp=cfg.latent_temp,
- groups=cfg.latent_groups,
- combine_groups=False,
- vq_dim=vq_dim,
- time_first=True,
- weight_proj_depth=cfg.quantizer_depth,
- weight_proj_factor=cfg.quantizer_factor,
- )
- self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim)
-
- self.mask_emb = nn.Parameter(
- torch.FloatTensor(cfg.encoder_embed_dim).uniform_()
- )
-
- self.encoder = TransformerEncoder(cfg)
- self.layer_norm = LayerNorm(self.embed)
-
- self.target_glu = None
- if cfg.target_glu:
- self.target_glu = nn.Sequential(
- nn.Linear(final_dim, final_dim * 2), nn.GLU()
- )
-
- self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim)
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- """Upgrade a (possibly old) state dict for new versions of fairseq."""
- return state_dict
-
- @classmethod
- def build_model(cls, cfg: Wav2Vec2Config, task=None):
- """Build a new model instance."""
-
- return cls(cfg)
-
- def apply_mask(
- self,
- x,
- padding_mask,
- mask_indices=None,
- mask_channel_indices=None,
- ):
- B, T, C = x.shape
-
- if self.mask_channel_prob > 0 and self.mask_channel_before:
- mask_channel_indices = compute_mask_indices(
- (B, C),
- None,
- self.mask_channel_prob,
- self.mask_channel_length,
- self.mask_channel_selection,
- self.mask_channel_other,
- no_overlap=self.no_mask_channel_overlap,
- min_space=self.mask_channel_min_space,
- )
- mask_channel_indices = (
- torch.from_numpy(mask_channel_indices)
- .to(x.device)
- .unsqueeze(1)
- .expand(-1, T, -1)
- )
- x[mask_channel_indices] = 0
-
- if self.mask_prob > 0:
- if mask_indices is None:
- mask_indices = compute_mask_indices(
- (B, T),
- padding_mask,
- self.mask_prob,
- self.mask_length,
- self.mask_selection,
- self.mask_other,
- min_masks=2,
- no_overlap=self.no_mask_overlap,
- min_space=self.mask_min_space,
- )
- mask_indices = torch.from_numpy(mask_indices).to(x.device)
- x = index_put(x, mask_indices, self.mask_emb)
- else:
- mask_indices = None
-
- if self.mask_channel_prob > 0 and not self.mask_channel_before:
- if mask_channel_indices is None:
- mask_channel_indices = compute_mask_indices(
- (B, C),
- None,
- self.mask_channel_prob,
- self.mask_channel_length,
- self.mask_channel_selection,
- self.mask_channel_other,
- no_overlap=self.no_mask_channel_overlap,
- min_space=self.mask_channel_min_space,
- )
- mask_channel_indices = (
- torch.from_numpy(mask_channel_indices)
- .to(x.device)
- .unsqueeze(1)
- .expand(-1, T, -1)
- )
- x = index_put(x, mask_channel_indices, 0)
-
- return x, mask_indices
-
- def sample_negatives(self, y, num, padding_count=None):
-
- if self.n_negatives == 0 and self.cross_sample_negatives == 0:
- return y.new(0)
-
- bsz, tsz, fsz = y.shape
- y = y.view(-1, fsz) # BTC => (BxT)C
-
- # FIXME: what happens if padding_count is specified?
- cross_high = tsz * bsz
- high = tsz - (padding_count or 0)
- with torch.no_grad():
- assert high > 1, f"{bsz,tsz,fsz}"
-
- if self.n_negatives > 0:
- tszs = (
- buffered_arange(num)
- .unsqueeze(-1)
- .expand(-1, self.n_negatives)
- .flatten()
- )
-
- neg_idxs = torch.randint(
- low=0, high=high - 1, size=(bsz, self.n_negatives * num)
- )
- neg_idxs[neg_idxs >= tszs] += 1
-
- if self.cross_sample_negatives > 0:
- tszs = (
- buffered_arange(num)
- .unsqueeze(-1)
- .expand(-1, self.cross_sample_negatives)
- .flatten()
- )
-
- cross_neg_idxs = torch.randint(
- low=0,
- high=cross_high - 1,
- size=(bsz, self.cross_sample_negatives * num),
- )
- cross_neg_idxs[cross_neg_idxs >= tszs] += 1
-
- if self.n_negatives > 0:
- for i in range(1, bsz):
- neg_idxs[i] += i * high
- else:
- neg_idxs = cross_neg_idxs
-
- if self.cross_sample_negatives > 0 and self.n_negatives > 0:
- neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1)
-
- negs = y[neg_idxs.view(-1)]
- negs = negs.view(
- bsz, num, self.n_negatives + self.cross_sample_negatives, fsz
- ).permute(
- 2, 0, 1, 3
- ) # to NxBxTxC
- return negs, neg_idxs
-
- def compute_preds(self, x, y, negatives):
-
- neg_is_pos = (y == negatives).all(-1)
- y = y.unsqueeze(0)
- targets = torch.cat([y, negatives], dim=0)
-
- logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x)
-
- logits = logits / self.logit_temp
-
- if is_xla_tensor(logits) or neg_is_pos.any():
- fillval = -float(2 ** 30)
- if not hasattr(self, "_inftensor"):
- self._inftensor = (
- torch.tensor(fillval).to(x.device)
- if is_xla_tensor(logits)
- else float("-inf")
- )
- logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor)
-
- return logits
-
- def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor):
- """
- Computes the output length of the convolutional layers
- """
-
- def _conv_out_length(input_length, kernel_size, stride):
- return torch.floor((input_length - kernel_size) / stride + 1)
-
- conv_cfg_list = eval(self.cfg.conv_feature_layers)
-
- for i in range(len(conv_cfg_list)):
- input_lengths = _conv_out_length(
- input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2]
- )
-
- return input_lengths.to(torch.long)
-
- def forward(
- self,
- source,
- padding_mask=None,
- mask=True,
- features_only=False,
- layer=None,
- mask_indices=None,
- mask_channel_indices=None,
- padding_count=None,
- ):
-
- if self.feature_grad_mult > 0:
- features = self.feature_extractor(source)
- if self.feature_grad_mult != 1.0:
- features = GradMultiply.apply(features, self.feature_grad_mult)
- else:
- with torch.no_grad():
- features = self.feature_extractor(source)
-
- features_pen = features.float().pow(2).mean()
-
- features = features.transpose(1, 2)
- features = self.layer_norm(features)
- unmasked_features = features.clone()
-
- if padding_mask is not None and padding_mask.any():
- input_lengths = (1 - padding_mask.long()).sum(-1)
- # apply conv formula to get real output_lengths
- output_lengths = self._get_feat_extract_output_lengths(input_lengths)
-
- padding_mask = torch.zeros(
- features.shape[:2], dtype=features.dtype, device=features.device
- )
-
- # these two operations makes sure that all values
- # before the output lengths indices are attended to
- padding_mask[
- (
- torch.arange(padding_mask.shape[0], device=padding_mask.device),
- output_lengths - 1,
- )
- ] = 1
- padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool()
- else:
- padding_mask = None
-
- if self.post_extract_proj is not None:
- features = self.post_extract_proj(features)
-
- features = self.dropout_input(features)
- unmasked_features = self.dropout_features(unmasked_features)
-
- num_vars = None
- code_ppl = None
- prob_ppl = None
- curr_temp = None
-
- if self.input_quantizer:
- q = self.input_quantizer(features, produce_targets=False)
- features = q["x"]
- num_vars = q["num_vars"]
- code_ppl = q["code_perplexity"]
- prob_ppl = q["prob_perplexity"]
- curr_temp = q["temp"]
- features = self.project_inp(features)
-
- if mask:
- x, mask_indices = self.apply_mask(
- features,
- padding_mask,
- mask_indices=mask_indices,
- mask_channel_indices=mask_channel_indices,
- )
- if not is_xla_tensor(x) and mask_indices is not None:
- # tpu-comment: reducing the size in a dynamic way causes
- # too many recompilations on xla.
- y = unmasked_features[mask_indices].view(
- unmasked_features.size(0), -1, unmasked_features.size(-1)
- )
- else:
- y = unmasked_features
- else:
- x = features
- y = unmasked_features
- mask_indices = None
-
- x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer)
-
- if features_only:
- return {
- "x": x,
- "padding_mask": padding_mask,
- "features": unmasked_features,
- "layer_results": layer_results,
- }
-
- if self.quantizer:
- q = self.quantizer(y, produce_targets=False)
- y = q["x"]
- num_vars = q["num_vars"]
- code_ppl = q["code_perplexity"]
- prob_ppl = q["prob_perplexity"]
- curr_temp = q["temp"]
-
- y = self.project_q(y)
-
- if self.negatives_from_everywhere:
- neg_cands = self.quantizer(unmasked_features, produce_targets=False)[
- "x"
- ]
- negs, _ = self.sample_negatives(
- neg_cands,
- y.size(1),
- padding_count=padding_count,
- )
- negs = self.project_q(negs)
-
- else:
- negs, _ = self.sample_negatives(
- y,
- y.size(1),
- padding_count=padding_count,
- )
-
- if self.codebook_negatives > 0:
- cb_negs = self.quantizer.sample_from_codebook(
- y.size(0) * y.size(1), self.codebook_negatives
- )
- cb_negs = cb_negs.view(
- self.codebook_negatives, y.size(0), y.size(1), -1
- ) # order doesn't matter
- cb_negs = self.project_q(cb_negs)
- negs = torch.cat([negs, cb_negs], dim=0)
- else:
- y = self.project_q(y)
-
- if self.negatives_from_everywhere:
- negs, _ = self.sample_negatives(
- unmasked_features,
- y.size(1),
- padding_count=padding_count,
- )
- negs = self.project_q(negs)
- else:
- negs, _ = self.sample_negatives(
- y,
- y.size(1),
- padding_count=padding_count,
- )
-
- if not is_xla_tensor(x):
- # tpu-comment: reducing the size in a dynamic way causes
- # too many recompilations on xla.
- x = x[mask_indices].view(x.size(0), -1, x.size(-1))
-
- if self.target_glu:
- y = self.target_glu(y)
- negs = self.target_glu(negs)
-
- x = self.final_proj(x)
- x = self.compute_preds(x, y, negs)
-
- result = {
- "x": x,
- "padding_mask": padding_mask,
- "features_pen": features_pen,
- }
-
- if prob_ppl is not None:
- result["prob_perplexity"] = prob_ppl
- result["code_perplexity"] = code_ppl
- result["num_vars"] = num_vars
- result["temp"] = curr_temp
-
- return result
-
- def quantize(self, x):
- assert self.quantizer is not None
- x = self.feature_extractor(x)
- x = x.transpose(1, 2)
- x = self.layer_norm(x)
- return self.quantizer.forward_idx(x)
-
- def extract_features(self, source, padding_mask, mask=False, layer=None):
- res = self.forward(
- source, padding_mask, mask=mask, features_only=True, layer=layer
- )
- return res
-
- def get_logits(self, net_output):
- logits = net_output["x"]
- logits = logits.transpose(0, 2)
- logits = logits.reshape(-1, logits.size(-1))
- return logits
-
- def get_targets(self, sample, net_output, expand_steps=True):
- x = net_output["x"]
- return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long)
-
- def get_extra_losses(self, net_output):
- pen = []
-
- if "prob_perplexity" in net_output:
- pen.append(
- (net_output["num_vars"] - net_output["prob_perplexity"])
- / net_output["num_vars"]
- )
-
- if "features_pen" in net_output:
- pen.append(net_output["features_pen"])
-
- return pen
-
- def remove_pretraining_modules(self):
- self.quantizer = None
- self.project_q = None
- self.target_glu = None
- self.final_proj = None
-
-
-class ConvFeatureExtractionModel(nn.Module):
- def __init__(
- self,
- conv_layers: List[Tuple[int, int, int]],
- dropout: float = 0.0,
- mode: str = "default",
- conv_bias: bool = False,
- ):
- super().__init__()
-
- assert mode in {"default", "layer_norm"}
-
- def block(
- n_in,
- n_out,
- k,
- stride,
- is_layer_norm=False,
- is_group_norm=False,
- conv_bias=False,
- ):
- def make_conv():
- conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias)
- nn.init.kaiming_normal_(conv.weight)
- return conv
-
- assert (
- is_layer_norm and is_group_norm
- ) == False, "layer norm and group norm are exclusive"
-
- if is_layer_norm:
- return nn.Sequential(
- make_conv(),
- nn.Dropout(p=dropout),
- nn.Sequential(
- TransposeLast(),
- Fp32LayerNorm(dim, elementwise_affine=True),
- TransposeLast(),
- ),
- nn.GELU(),
- )
- elif is_group_norm:
- return nn.Sequential(
- make_conv(),
- nn.Dropout(p=dropout),
- Fp32GroupNorm(dim, dim, affine=True),
- nn.GELU(),
- )
- else:
- return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU())
-
- in_d = 1
- self.conv_layers = nn.ModuleList()
- for i, cl in enumerate(conv_layers):
- assert len(cl) == 3, "invalid conv definition: " + str(cl)
- (dim, k, stride) = cl
-
- self.conv_layers.append(
- block(
- in_d,
- dim,
- k,
- stride,
- is_layer_norm=mode == "layer_norm",
- is_group_norm=mode == "default" and i == 0,
- conv_bias=conv_bias,
- )
- )
- in_d = dim
-
- def forward(self, x):
-
- # BxT -> BxCxT
- x = x.unsqueeze(1)
-
- for conv in self.conv_layers:
- x = conv(x)
-
- return x
-
-
-class TransformerEncoder(nn.Module):
- def __init__(self, args):
- super().__init__()
-
-        self.args = args  # kept so max_positions() below can read args.max_positions
-        self.dropout = args.dropout
- self.embedding_dim = args.encoder_embed_dim
-
- self.pos_conv = nn.Conv1d(
- self.embedding_dim,
- self.embedding_dim,
- kernel_size=args.conv_pos,
- padding=args.conv_pos // 2,
- groups=args.conv_pos_groups,
- )
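-        # Initialize the positional convolution with a std that shrinks with the
-        # kernel size and embedding dimension; `dropout` below is fixed to 0 and
-        # only enters the scaling formula.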
- dropout = 0
- std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * self.embedding_dim))
- nn.init.normal_(self.pos_conv.weight, mean=0, std=std)
- nn.init.constant_(self.pos_conv.bias, 0)
-
- self.pos_conv = nn.utils.weight_norm(self.pos_conv, name="weight", dim=2)
- self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU())
-
- self.layers = nn.ModuleList(
- [
- TransformerSentenceEncoderLayer(
- embedding_dim=self.embedding_dim,
- ffn_embedding_dim=args.encoder_ffn_embed_dim,
- num_attention_heads=args.encoder_attention_heads,
- dropout=self.dropout,
- attention_dropout=args.attention_dropout,
- activation_dropout=args.activation_dropout,
- activation_fn=args.activation_fn,
- layer_norm_first=args.layer_norm_first,
- )
- for _ in range(args.encoder_layers)
- ]
- )
-
- self.layer_norm_first = args.layer_norm_first
- self.layer_norm = LayerNorm(self.embedding_dim)
- self.layerdrop = args.encoder_layerdrop
-
- self.apply(init_bert_params)
-
- def forward(self, x, padding_mask=None, layer=None):
- x, layer_results = self.extract_features(x, padding_mask, layer)
-
- if self.layer_norm_first and layer is None:
- x = self.layer_norm(x)
-
- return x, layer_results
-
- def extract_features(self, x, padding_mask=None, tgt_layer=None):
-
- if padding_mask is not None:
- x = index_put(x, padding_mask, 0)
-
- x_conv = self.pos_conv(x.transpose(1, 2))
- x_conv = x_conv.transpose(1, 2)
- x = x + x_conv
-
- if not self.layer_norm_first:
- x = self.layer_norm(x)
-
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- layer_results = []
- r = None
- for i, layer in enumerate(self.layers):
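-            # LayerDrop (Fan et al., 2019): during training, skip this layer
-            # entirely with probability self.layerdrop.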
- dropout_probability = np.random.random()
- if not self.training or (dropout_probability > self.layerdrop):
- x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False)
- if tgt_layer is not None:
- layer_results.append((x, z))
- if i == tgt_layer:
- r = x
- break
-
- if r is not None:
- x = r
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- return x, layer_results
-
- def max_positions(self):
- """Maximum output length supported by the encoder."""
- return self.args.max_positions
-
- def upgrade_state_dict_named(self, state_dict, name):
- """Upgrade a (possibly old) state dict for new versions of fairseq."""
- return state_dict
-
-
-class TransformerSentenceEncoderLayer(nn.Module):
- """
- Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained
- models.
- """
-
- def __init__(
- self,
-        embedding_dim: int = 768,
-        ffn_embedding_dim: int = 3072,
-        num_attention_heads: int = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- activation_fn: str = "relu",
- layer_norm_first: bool = False,
- ) -> None:
-
- super().__init__()
- # Initialize parameters
- self.embedding_dim = embedding_dim
- self.dropout = dropout
- self.activation_dropout = activation_dropout
-
- # Initialize blocks
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.self_attn = MultiheadAttention(
- self.embedding_dim,
- num_attention_heads,
- dropout=attention_dropout,
- self_attention=True,
- )
-
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(self.activation_dropout)
- self.dropout3 = nn.Dropout(dropout)
-
- self.layer_norm_first = layer_norm_first
-
- # layer norm associated with the self attention layer
- self.self_attn_layer_norm = LayerNorm(self.embedding_dim)
- self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim)
- self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim)
-
- # layer norm associated with the position wise feed-forward NN
- self.final_layer_norm = LayerNorm(self.embedding_dim)
-
- def forward(
- self,
- x: torch.Tensor,
- self_attn_mask: torch.Tensor = None,
- self_attn_padding_mask: torch.Tensor = None,
- need_weights: bool = False,
- att_args=None,
- ):
- """
- LayerNorm is applied either before or after the self-attention/ffn
-        modules similar to the original Transformer implementation.
- """
- residual = x
-
- if self.layer_norm_first:
- x = self.self_attn_layer_norm(x)
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- attn_mask=self_attn_mask,
- )
- x = self.dropout1(x)
- x = residual + x
-
- residual = x
- x = self.final_layer_norm(x)
- x = self.activation_fn(self.fc1(x))
- x = self.dropout2(x)
- x = self.fc2(x)
- x = self.dropout3(x)
- x = residual + x
- else:
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- )
-
- x = self.dropout1(x)
- x = residual + x
-
- x = self.self_attn_layer_norm(x)
-
- residual = x
- x = self.activation_fn(self.fc1(x))
- x = self.dropout2(x)
- x = self.fc2(x)
- x = self.dropout3(x)
- x = residual + x
- x = self.final_layer_norm(x)
-
- return x, attn
diff --git a/spaces/sriramelango/Social_Classification_Public/utils/checkpoint_utils.py b/spaces/sriramelango/Social_Classification_Public/utils/checkpoint_utils.py
deleted file mode 100644
index 8fed4bc2a214833ab1153d5bc3ff6756db25048b..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/utils/checkpoint_utils.py
+++ /dev/null
@@ -1,875 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import ast
-import collections
-import contextlib
-import logging
-import numpy as np
-import os
-import re
-import time
-import traceback
-import math
-from collections import OrderedDict
-from typing import Any, Dict, Optional, Union
-
-import torch
-from fairseq.dataclass.configs import CheckpointConfig
-from fairseq.dataclass.utils import (
- convert_namespace_to_omegaconf,
- overwrite_args_by_name,
-)
-from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP
-from fairseq.file_io import PathManager
-from fairseq.models import FairseqDecoder, FairseqEncoder
-from omegaconf import DictConfig, open_dict, OmegaConf
-
-from data import data_utils
-
-logger = logging.getLogger(__name__)
-
-
-def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss):
- from fairseq import meters
-
- # only one worker should attempt to create the required dir
- if trainer.data_parallel_rank == 0:
- os.makedirs(cfg.save_dir, exist_ok=True)
-
- prev_best = getattr(save_checkpoint, "best", val_loss)
- if val_loss is not None:
- best_function = max if cfg.maximize_best_checkpoint_metric else min
- save_checkpoint.best = best_function(val_loss, prev_best)
-
- if cfg.no_save:
- return
-
- trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state
-
- if not trainer.should_save_checkpoint_on_current_rank:
- if trainer.always_call_state_dict_during_save_checkpoint:
- trainer.state_dict()
- return
-
- write_timer = meters.StopwatchMeter()
- write_timer.start()
-
- epoch = epoch_itr.epoch
- end_of_epoch = epoch_itr.end_of_epoch()
- updates = trainer.get_num_updates()
-
- logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates")
-
- def is_better(a, b):
- return a >= b if cfg.maximize_best_checkpoint_metric else a <= b
-
- suffix = trainer.checkpoint_suffix
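-    # Map each candidate checkpoint filename to the condition under which it
-    # should be written on this call (epoch / update / best / last checkpoints).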
- checkpoint_conds = collections.OrderedDict()
- checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = (
- end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0
- )
- checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = (
- not end_of_epoch
- and cfg.save_interval_updates > 0
- and updates % cfg.save_interval_updates == 0
- )
- checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and (
- not hasattr(save_checkpoint, "best")
- or is_better(val_loss, save_checkpoint.best)
- )
- if val_loss is not None and cfg.keep_best_checkpoints > 0:
- worst_best = getattr(save_checkpoint, "best", None)
- chkpts = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if len(chkpts) > 0:
- p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0]
- worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), ""))
- # add random digits to resolve ties
- with data_utils.numpy_seed(epoch, updates, val_loss):
- rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints)
-
- checkpoint_conds[
- "checkpoint.best_{}_{:.3f}{}{}.pt".format(
- cfg.best_checkpoint_metric,
- val_loss,
- rand_sfx,
- suffix
- )
- ] = worst_best is None or is_better(val_loss, worst_best)
- checkpoint_conds[
- "checkpoint_last{}.pt".format(suffix)
- ] = not cfg.no_last_checkpoints
-
- extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss}
- if hasattr(save_checkpoint, "best"):
- extra_state.update({"best": save_checkpoint.best})
-
- checkpoints = [
- os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond
- ]
- if len(checkpoints) > 0:
- trainer.save_checkpoint(checkpoints[0], extra_state)
- for cp in checkpoints[1:]:
- if cfg.write_checkpoints_asynchronously:
- # TODO[ioPath]: Need to implement a delayed asynchronous
- # file copying/moving feature.
- logger.warning(
- f"ioPath is not copying {checkpoints[0]} to {cp} "
- "since async write mode is on."
- )
- else:
- assert PathManager.copy(
- checkpoints[0], cp, overwrite=True
- ), f"Failed to copy {checkpoints[0]} to {cp}"
-
- write_timer.stop()
- logger.info(
- "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format(
- checkpoints[0], epoch, updates, val_loss, write_timer.sum
- )
- )
-
- if not end_of_epoch and cfg.keep_interval_updates > 0:
- # remove old checkpoints; checkpoints are sorted in descending order
- if cfg.keep_interval_updates_pattern == -1:
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix)
- )
- else:
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix),
- keep_match=True,
- )
- checkpoints = [
- x[0]
- for x in checkpoints
- if x[1] % cfg.keep_interval_updates_pattern != 0
- ]
-
- for old_chk in checkpoints[cfg.keep_interval_updates :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_last_epochs > 0:
- # remove old epoch checkpoints; checkpoints are sorted in descending order
- checkpoints = checkpoint_paths(
- cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix)
- )
- for old_chk in checkpoints[cfg.keep_last_epochs :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
- if cfg.keep_best_checkpoints > 0:
- # only keep the best N checkpoints according to validation metric
- checkpoints = checkpoint_paths(
- cfg.save_dir,
- pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format(
- cfg.best_checkpoint_metric, suffix
- ),
- )
- if not cfg.maximize_best_checkpoint_metric:
- checkpoints = checkpoints[::-1]
- for old_chk in checkpoints[cfg.keep_best_checkpoints :]:
- if os.path.lexists(old_chk):
- os.remove(old_chk)
- elif PathManager.exists(old_chk):
- PathManager.rm(old_chk)
-
-
-def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args):
- """
- Load a checkpoint and restore the training iterator.
-
- *passthrough_args* will be passed through to
- ``trainer.get_train_iterator``.
- """
-
- reset_optimizer = cfg.reset_optimizer
- reset_lr_scheduler = cfg.reset_lr_scheduler
- optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides)
- reset_meters = cfg.reset_meters
- reset_dataloader = cfg.reset_dataloader
-
- if cfg.finetune_from_model is not None and (
- reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader
- ):
- raise ValueError(
- "--finetune-from-model can not be set together with either --reset-optimizer"
- " or reset_lr_scheduler or reset_meters or reset_dataloader"
- )
-
- suffix = trainer.checkpoint_suffix
- if (
- cfg.restore_file == "checkpoint_last.pt"
- ): # default value of restore_file is 'checkpoint_last.pt'
- checkpoint_path = os.path.join(
- cfg.save_dir, "checkpoint_last{}.pt".format(suffix)
- )
- first_launch = not PathManager.exists(checkpoint_path)
- if cfg.finetune_from_model is not None and first_launch:
- # if there is no last checkpoint to restore, start the finetune from pretrained model
- # else just use usual logic to load checkpoint, e.g. restart from last checkpoint and etc.
- if PathManager.exists(cfg.finetune_from_model):
- checkpoint_path = cfg.finetune_from_model
- reset_optimizer = True
- reset_lr_scheduler = True
- reset_meters = True
- reset_dataloader = True
- logger.info(
- f"loading pretrained model from {checkpoint_path}: "
- "optimizer, lr scheduler, meters, dataloader will be reset"
- )
- else:
- raise ValueError(
- f"--funetune-from-model {cfg.finetune_from_model} does not exist"
- )
- elif suffix is not None:
- checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt")
- else:
- checkpoint_path = cfg.restore_file
-
- if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model:
- raise ValueError(
- "--finetune-from-model and --restore-file (non-default value) "
- "can not be specified together: " + str(cfg)
- )
-
- extra_state = trainer.load_checkpoint(
- checkpoint_path,
- reset_optimizer,
- reset_lr_scheduler,
- optimizer_overrides,
- reset_meters=reset_meters,
- )
-
- if (
- extra_state is not None
- and "best" in extra_state
- and not reset_optimizer
- and not reset_meters
- ):
- save_checkpoint.best = extra_state["best"]
-
- if extra_state is not None and not reset_dataloader:
- # restore iterator from checkpoint
- itr_state = extra_state["train_iterator"]
- epoch_itr = trainer.get_train_iterator(
- epoch=itr_state["epoch"], load_dataset=True, **passthrough_args
- )
- epoch_itr.load_state_dict(itr_state)
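-        # Fast-forward the underlying file-backed datasets to where the
-        # interrupted epoch stopped, so resumed training does not re-read
-        # examples already consumed; auxiliary text/image/detection datasets
-        # (if present) are advanced by a derived offset.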
- _n = itr_state['iterations_in_epoch']
- offset = sum(len(_) for _ in epoch_itr.batch_sampler[:_n])
- epoch_itr.dataset.dataset._seek(offset=offset)
- true_num = int(math.ceil(len(epoch_itr.dataset) / 8)) * 8
- another_offset = ((epoch_itr.epoch - 1) * true_num + offset) // 8
- if hasattr(epoch_itr.dataset, 'pure_text_dataset'):
- text_offset = (2 * another_offset) % len(epoch_itr.dataset.pure_text_dataset)
- epoch_itr.dataset.pure_text_dataset._seek(offset=text_offset)
- if hasattr(epoch_itr.dataset, 'pure_image_dataset'):
- image_offset = another_offset % len(epoch_itr.dataset.pure_image_dataset)
- epoch_itr.dataset.pure_image_dataset._seek(offset=image_offset)
- if hasattr(epoch_itr.dataset, 'detection_dataset'):
- detection_offset = another_offset % len(epoch_itr.dataset.detection_dataset)
- epoch_itr.dataset.detection_dataset._seek(offset=detection_offset)
- else:
- epoch_itr = trainer.get_train_iterator(
- epoch=1, load_dataset=True, **passthrough_args
- )
-
- trainer.lr_step(epoch_itr.epoch)
-
- return extra_state, epoch_itr
-
-
-def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False):
- """Loads a checkpoint to CPU (with upgrading for backward compatibility).
-
- If doing single-GPU training or if the checkpoint is only being loaded by at
- most one process on each node (current default behavior is for only rank 0
- to read the checkpoint from disk), load_on_all_ranks should be False to
- avoid errors from torch.distributed not having been initialized or
- torch.distributed.barrier() hanging.
-
- If all processes on each node may be loading the checkpoint
- simultaneously, load_on_all_ranks should be set to True to avoid I/O
- conflicts.
-
- There's currently no support for > 1 but < all processes loading the
- checkpoint on each node.
- """
- local_path = PathManager.get_local_path(path)
- # The locally cached file returned by get_local_path() may be stale for
- # remote files that are periodically updated/overwritten (ex:
- # checkpoint_last.pt) - so we remove the local copy, sync across processes
- # (if needed), and then download a fresh copy.
- if local_path != path and PathManager.path_requires_pathmanager(path):
- try:
- os.remove(local_path)
- except FileNotFoundError:
- # With potentially multiple processes removing the same file, the
- # file being missing is benign (missing_ok isn't available until
- # Python 3.8).
- pass
- if load_on_all_ranks:
- torch.distributed.barrier()
- local_path = PathManager.get_local_path(path)
-
- with open(local_path, "rb") as f:
- state = torch.load(f, map_location=torch.device("cpu"))
-
- if "args" in state and state["args"] is not None and arg_overrides is not None:
- args = state["args"]
- for arg_name, arg_val in arg_overrides.items():
- setattr(args, arg_name, arg_val)
-
- if "cfg" in state and state["cfg"] is not None:
-
- # hack to be able to set Namespace in dict config. this should be removed when we update to newer
- # omegaconf version that supports object flags, or when we migrate all existing models
- from omegaconf import _utils
-
- old_primitive = _utils.is_primitive_type
- _utils.is_primitive_type = lambda _: True
-
- state["cfg"] = OmegaConf.create(state["cfg"])
-
- _utils.is_primitive_type = old_primitive
- OmegaConf.set_struct(state["cfg"], True)
-
- if arg_overrides is not None:
- overwrite_args_by_name(state["cfg"], arg_overrides)
-
- state = _upgrade_state_dict(state)
- return state
-
-
-def load_model_ensemble(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- """Loads an ensemble of models.
-
- Args:
- filenames (List[str]): checkpoint files to load
- arg_overrides (Dict[str,Any], optional): override model args that
- were used during model training
- task (fairseq.tasks.FairseqTask, optional): task to use for loading
- """
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble, args, _task = load_model_ensemble_and_task(
- filenames,
- arg_overrides,
- task,
- strict,
- suffix,
- num_shards,
- state,
- )
- return ensemble, args
-
-
-def get_maybe_sharded_checkpoint_filename(
- filename: str, suffix: str, shard_idx: int, num_shards: int
-) -> str:
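-    # Resolve the on-disk checkpoint name: prefer an FSDP shard file
-    # ("<name><suffix>-shard<idx>.pt") if it exists, otherwise a model-parallel
-    # part file ("<name>_part<idx>.pt") when num_shards > 1, else the plain name.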
- orig_filename = filename
- filename = filename.replace(".pt", suffix + ".pt")
- fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt"
- model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt"
- if PathManager.exists(fsdp_filename):
- return fsdp_filename
- elif num_shards > 1:
- return model_parallel_filename
- else:
- return filename
-
-
-def load_model_ensemble_and_task(
- filenames,
- arg_overrides: Optional[Dict[str, Any]] = None,
- task=None,
- strict=True,
- suffix="",
- num_shards=1,
- state=None,
-):
- assert state is None or len(filenames) == 1
-
- from fairseq import tasks
-
- assert not (
- strict and num_shards > 1
- ), "Cannot load state dict with strict=True and checkpoint shards > 1"
- ensemble = []
- cfg = None
- for filename in filenames:
- orig_filename = filename
- model_shard_state = {"shard_weights": [], "shard_metadata": []}
- assert num_shards > 0
- st = time.time()
- for shard_idx in range(num_shards):
- filename = get_maybe_sharded_checkpoint_filename(
- orig_filename, suffix, shard_idx, num_shards
- )
-
- if not PathManager.exists(filename):
- raise IOError("Model file not found: {}".format(filename))
- if state is None:
- state = load_checkpoint_to_cpu(filename, arg_overrides)
- if "args" in state and state["args"] is not None:
- cfg = convert_namespace_to_omegaconf(state["args"])
- elif "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- else:
- raise RuntimeError(
- f"Neither args nor cfg exist in state keys = {state.keys()}"
- )
-
- if task is None:
- task = tasks.setup_task(cfg.task)
-
- if "task_state" in state:
- task.load_state_dict(state["task_state"])
-
- if "fsdp_metadata" in state and num_shards > 1:
- model_shard_state["shard_weights"].append(state["model"])
- model_shard_state["shard_metadata"].append(state["fsdp_metadata"])
- # check FSDP import before the code goes too far
- if not has_FSDP:
- raise ImportError(
- "Cannot find FullyShardedDataParallel. "
- "Please install fairscale with: pip install fairscale"
- )
- if shard_idx == num_shards - 1:
- consolidated_model_state = FSDP.consolidate_shard_weights(
- shard_weights=model_shard_state["shard_weights"],
- shard_metadata=model_shard_state["shard_metadata"],
- )
- model = task.build_model(cfg.model)
- model.load_state_dict(
- consolidated_model_state, strict=strict, model_cfg=cfg.model
- )
- else:
- # model parallel checkpoint or unsharded checkpoint
- model = task.build_model(cfg.model)
- model.load_state_dict(
- state["model"], strict=strict, model_cfg=cfg.model
- )
-
- # reset state so it gets loaded for the next model in ensemble
- state = None
- if shard_idx % 10 == 0 and shard_idx > 0:
- elapsed = time.time() - st
- logger.info(
- f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard"
- )
-
- # build model for ensemble
- ensemble.append(model)
- return ensemble, cfg, task
-
-
-def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False):
- """Retrieves all checkpoints found in `path` directory.
-
- Checkpoints are identified by matching filename to the specified pattern. If
- the pattern contains groups, the result will be sorted by the first group in
- descending order.
- """
- pt_regexp = re.compile(pattern)
- files = PathManager.ls(path)
-
- entries = []
- for i, f in enumerate(files):
- m = pt_regexp.fullmatch(f)
- if m is not None:
- idx = float(m.group(1)) if len(m.groups()) > 0 else i
- entries.append((idx, m.group(0)))
- if keep_match:
- return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)]
- else:
- return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)]
-
-
-def torch_persistent_save(obj, filename, async_write: bool = False):
- if async_write:
- with PathManager.opena(filename, "wb") as f:
- _torch_persistent_save(obj, f)
- else:
- with PathManager.open(filename, "wb") as f:
- _torch_persistent_save(obj, f)
- # if PathManager.supports_rename(filename):
- # # do atomic save
- # with PathManager.open(filename + ".tmp", "wb") as f:
- # _torch_persistent_save(obj, f)
- # PathManager.rename(filename + ".tmp", filename)
- # else:
- # # fallback to non-atomic save
- # with PathManager.open(filename, "wb") as f:
- # _torch_persistent_save(obj, f)
-
-
-def _torch_persistent_save(obj, f):
- if isinstance(f, str):
- with PathManager.open(f, "wb") as h:
- torch_persistent_save(obj, h)
- return
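-    # Retry the save a few times to tolerate transient I/O failures; log the
-    # traceback and re-raise only on the final attempt.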
- for i in range(3):
- try:
- return torch.save(obj, f)
- except Exception:
- if i == 2:
- logger.error(traceback.format_exc())
- raise
-
-
-def _upgrade_state_dict(state):
- """Helper for upgrading old model checkpoints."""
-
- # add optimizer_history
- if "optimizer_history" not in state:
- state["optimizer_history"] = [
- {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]}
- ]
- state["last_optimizer_state"] = state["optimizer"]
- del state["optimizer"]
- del state["best_loss"]
- # move extra_state into sub-dictionary
- if "epoch" in state and "extra_state" not in state:
- state["extra_state"] = {
- "epoch": state["epoch"],
- "batch_offset": state["batch_offset"],
- "val_loss": state["val_loss"],
- }
- del state["epoch"]
- del state["batch_offset"]
- del state["val_loss"]
- # reduce optimizer history's memory usage (only keep the last state)
- if "optimizer" in state["optimizer_history"][-1]:
- state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"]
- for optim_hist in state["optimizer_history"]:
- del optim_hist["optimizer"]
- # record the optimizer class name
- if "optimizer_name" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG"
- # move best_loss into lr_scheduler_state
- if "lr_scheduler_state" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["lr_scheduler_state"] = {
- "best": state["optimizer_history"][-1]["best_loss"]
- }
- del state["optimizer_history"][-1]["best_loss"]
- # keep track of number of updates
- if "num_updates" not in state["optimizer_history"][-1]:
- state["optimizer_history"][-1]["num_updates"] = 0
- # old model checkpoints may not have separate source/target positions
- if (
- "args" in state
- and hasattr(state["args"], "max_positions")
- and not hasattr(state["args"], "max_source_positions")
- ):
- state["args"].max_source_positions = state["args"].max_positions
- state["args"].max_target_positions = state["args"].max_positions
- # use stateful training data iterator
- if "train_iterator" not in state["extra_state"]:
- state["extra_state"]["train_iterator"] = {
- "epoch": state["extra_state"]["epoch"],
- "iterations_in_epoch": state["extra_state"].get("batch_offset", 0),
- }
-
- # backward compatibility, cfg updates
- if "args" in state and state["args"] is not None:
- # default to translation task
- if not hasattr(state["args"], "task"):
- state["args"].task = "translation"
- # --raw-text and --lazy-load are deprecated
- if getattr(state["args"], "raw_text", False):
- state["args"].dataset_impl = "raw"
- elif getattr(state["args"], "lazy_load", False):
- state["args"].dataset_impl = "lazy"
- # epochs start at 1
- if state["extra_state"]["train_iterator"] is not None:
- state["extra_state"]["train_iterator"]["epoch"] = max(
- state["extra_state"]["train_iterator"].get("epoch", 1), 1
- )
- # --remove-bpe ==> --postprocess
- if hasattr(state["args"], "remove_bpe"):
- state["args"].post_process = state["args"].remove_bpe
- # --min-lr ==> --stop-min-lr
- if hasattr(state["args"], "min_lr"):
- state["args"].stop_min_lr = state["args"].min_lr
- del state["args"].min_lr
- # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion
- if (
- hasattr(state["args"], "criterion")
- and state["args"].criterion in [
- "binary_cross_entropy",
- "kd_binary_cross_entropy",
- ]
- ):
- state["args"].criterion = "wav2vec"
- # remove log_keys if it's None (criteria will supply a default value of [])
- if hasattr(state["args"], "log_keys") and state["args"].log_keys is None:
- delattr(state["args"], "log_keys")
- # speech_pretraining => audio pretraining
- if (
- hasattr(state["args"], "task")
- and state["args"].task == "speech_pretraining"
- ):
- state["args"].task = "audio_pretraining"
- # audio_cpc => wav2vec
- if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc":
- state["args"].arch = "wav2vec"
- # convert legacy float learning rate to List[float]
- if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float):
- state["args"].lr = [state["args"].lr]
- # convert task data arg to a string instead of List[string]
- if (
- hasattr(state["args"], "data")
- and isinstance(state["args"].data, list)
- and len(state["args"].data) > 0
- ):
- state["args"].data = state["args"].data[0]
- # remove keys in state["args"] related to teacher-student learning
- for key in [
- "static_teachers",
- "static_teacher_weights",
- "dynamic_teachers",
- "dynamic_teacher_weights",
- ]:
- if key in state["args"]:
- delattr(state["args"], key)
-
- state["cfg"] = convert_namespace_to_omegaconf(state["args"])
-
- if "cfg" in state and state["cfg"] is not None:
- cfg = state["cfg"]
- with open_dict(cfg):
- # any upgrades for Hydra-based configs
- if (
- "task" in cfg
- and "eval_wer_config" in cfg.task
- and isinstance(cfg.task.eval_wer_config.print_alignment, bool)
- ):
- cfg.task.eval_wer_config.print_alignment = "hard"
- if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool):
- cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None
- if (
- "model" in cfg
- and "w2v_args" in cfg.model
- and cfg.model.w2v_args is not None
- and (
- hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args
- )
- and hasattr(cfg.model.w2v_args.task, "eval_wer_config")
- and cfg.model.w2v_args.task.eval_wer_config is not None
- and isinstance(
- cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool
- )
- ):
- cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard"
-
- return state
-
-
-def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]):
- """Prune the given state_dict if desired for LayerDrop
- (https://arxiv.org/abs/1909.11556).
-
- Training with LayerDrop allows models to be robust to pruning at inference
- time. This function prunes state_dict to allow smaller models to be loaded
- from a larger model and re-maps the existing state_dict for this to occur.
-
- It's called by functions that load models from checkpoints and does not
- need to be called directly.
- """
- arch = None
- if model_cfg is not None:
- arch = (
- model_cfg._name
- if isinstance(model_cfg, DictConfig)
- else getattr(model_cfg, "arch", None)
- )
-
- if not model_cfg or arch is None or arch == "ptt_transformer":
- # args should not be none, but don't crash if it is.
- return state_dict
-
- encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None)
- decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None)
-
- if not encoder_layers_to_keep and not decoder_layers_to_keep:
- return state_dict
-
- # apply pruning
- logger.info(
- "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop"
- )
-
- def create_pruning_pass(layers_to_keep, layer_name):
- keep_layers = sorted(
- int(layer_string) for layer_string in layers_to_keep.split(",")
- )
- mapping_dict = {}
- for i in range(len(keep_layers)):
- mapping_dict[str(keep_layers[i])] = str(i)
-
- regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name))
- return {"substitution_regex": regex, "mapping_dict": mapping_dict}
-
- pruning_passes = []
- if encoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder"))
- if decoder_layers_to_keep:
- pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder"))
-
- new_state_dict = {}
- for layer_name in state_dict.keys():
- match = re.search(r"\.layers\.(\d+)\.", layer_name)
- # if layer has no number in it, it is a supporting layer, such as an
- # embedding
- if not match:
- new_state_dict[layer_name] = state_dict[layer_name]
- continue
-
- # otherwise, layer should be pruned.
- original_layer_number = match.group(1)
- # figure out which mapping dict to replace from
- for pruning_pass in pruning_passes:
- if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[
- "substitution_regex"
- ].search(layer_name):
- new_layer_number = pruning_pass["mapping_dict"][original_layer_number]
- substitution_match = pruning_pass["substitution_regex"].search(
- layer_name
- )
- new_state_key = (
- layer_name[: substitution_match.start(1)]
- + new_layer_number
- + layer_name[substitution_match.end(1) :]
- )
- new_state_dict[new_state_key] = state_dict[layer_name]
-
- # Since layers are now pruned, *_layers_to_keep are no longer needed.
-    # This is more of a "make it work" fix than a proper fix.
- if isinstance(model_cfg, DictConfig):
- context = open_dict(model_cfg)
- else:
- context = contextlib.ExitStack()
- with context:
- if hasattr(model_cfg, "encoder_layers_to_keep"):
- model_cfg.encoder_layers_to_keep = None
- if hasattr(model_cfg, "decoder_layers_to_keep"):
- model_cfg.decoder_layers_to_keep = None
-
- return new_state_dict
-
-
-def load_pretrained_component_from_model(
- component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str
-):
- """
- Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the
- provided `component` object. If state_dict fails to load, there may be a
- mismatch in the architecture of the corresponding `component` found in the
- `checkpoint` file.
- """
- if not PathManager.exists(checkpoint):
- raise IOError("Model file not found: {}".format(checkpoint))
- state = load_checkpoint_to_cpu(checkpoint)
- if isinstance(component, FairseqEncoder):
- component_type = "encoder"
- elif isinstance(component, FairseqDecoder):
- component_type = "decoder"
- else:
- raise ValueError(
- "component to load must be either a FairseqEncoder or "
- "FairseqDecoder. Loading other component types are not supported."
- )
- component_state_dict = OrderedDict()
- for key in state["model"].keys():
- if key.startswith(component_type):
- # encoder.input_layers.0.0.weight --> input_layers.0.0.weight
- component_subkey = key[len(component_type) + 1 :]
- component_state_dict[component_subkey] = state["model"][key]
- component.load_state_dict(component_state_dict, strict=True)
- return component
-
-
-def verify_checkpoint_directory(save_dir: str) -> None:
- if not os.path.exists(save_dir):
- os.makedirs(save_dir, exist_ok=True)
- temp_file_path = os.path.join(save_dir, "dummy")
- try:
- with open(temp_file_path, "w"):
- pass
- except OSError as e:
- logger.warning(
- "Unable to access checkpoint save directory: {}".format(save_dir)
- )
- raise e
- else:
- os.remove(temp_file_path)
-
-
-def load_ema_from_checkpoint(fpath):
- """Loads exponential moving averaged (EMA) checkpoint from input and
- returns a model with ema weights.
-
- Args:
- fpath: A string path of checkpoint to load from.
-
- Returns:
- A dict of string keys mapping to various values. The 'model' key
- from the returned dict should correspond to an OrderedDict mapping
- string parameter names to torch Tensors.
- """
- params_dict = collections.OrderedDict()
- new_state = None
-
- with PathManager.open(fpath, 'rb') as f:
- new_state = torch.load(
- f,
- map_location=(
- lambda s, _: torch.serialization.default_restore_location(s, 'cpu')
- ),
- )
-
- # EMA model is stored in a separate "extra state"
- model_params = new_state['extra_state']['ema']
-
- for key in list(model_params.keys()):
- p = model_params[key]
- if isinstance(p, torch.HalfTensor):
- p = p.float()
- if key not in params_dict:
- params_dict[key] = p.clone()
- # NOTE: clone() is needed in case of p is a shared parameter
- else:
- raise ValueError("Key {} is repeated in EMA model params.".format(key))
-
- if len(params_dict) == 0:
- raise ValueError(
- f"Input checkpoint path '{fpath}' does not contain "
- "ema model weights, is this model trained with EMA?"
- )
-
- new_state['model'] = params_dict
- return new_state
diff --git a/spaces/srush/gradio_tools/app.py b/spaces/srush/gradio_tools/app.py
deleted file mode 100644
index bc3a826a8c35ae70e6961293f93c246a710c7c87..0000000000000000000000000000000000000000
--- a/spaces/srush/gradio_tools/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# + tags=["hide_inp"]
-
-desc = """
-### Gradio Tool
-
-Examples using the gradio tool [Open In Colab](https://colab.research.google.com/github/srush/MiniChain/blob/master/examples/gradio_example.ipynb)
-
-"""
-# -
-
-# $
-
-from minichain import show, prompt, OpenAI, OpenAIStream
-import gradio as gr
-from gradio_tools.tools import StableDiffusionTool, ImageCaptioningTool
-
-@prompt(OpenAIStream(), stream=True)
-def picture(model, query):
- out = ""
- for r in model.stream(query):
- out += r
- yield out
-
-@prompt(StableDiffusionTool(), stream=True, block_input=lambda: gr.Textbox(label=""))
-def gen(model, query):
- for r in model.stream(query):
- yield "https://htmlcolorcodes.com/assets/images/colors/baby-blue-color-solid-background-1920x1080.png"
- yield r
-
-@prompt(ImageCaptioningTool(), block_output=lambda: gr.Textbox(label=""))
-def caption(model, img_src):
- return model(img_src)
-
-def gradio_example(query):
- return caption(gen(picture(query)))
-
-
-# $
-
-gradio = show(gradio_example,
- subprompts=[picture, gen, caption],
- examples=['Describe a one-sentence fantasy scene.',
- 'Describe a one-sentence scene happening on the moon.'],
- out_type="markdown",
- description=desc,
- css="#advanced {display: none}"
- )
-if __name__ == "__main__":
- gradio.queue().launch()
-
diff --git a/spaces/stamps-labs/swp-ui/README.md b/spaces/stamps-labs/swp-ui/README.md
deleted file mode 100644
index 00f561b981ee9f73b9a5572d35bde587421ea483..0000000000000000000000000000000000000000
--- a/spaces/stamps-labs/swp-ui/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Swp Ui
-emoji: 📚
-colorFrom: gray
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Alphomega Elliott Waves Crack [WORK] For Mac.md b/spaces/stomexserde/gpt4-ui/Examples/Alphomega Elliott Waves Crack [WORK] For Mac.md
deleted file mode 100644
index 246a6469f30db18f73338ca545fe7dec5dd06627..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Alphomega Elliott Waves Crack [WORK] For Mac.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
How to Use Alphomega Elliott Waves Crack for Mac
-
Alphomega Elliott Waves is a powerful tool for technical analysis and trading based on the Elliott Wave Theory. It helps you identify the patterns and cycles of the market movements and forecast the future trends. However, this software is not free and requires a license to use.
If you are looking for a way to use Alphomega Elliott Waves without paying for it, you might be tempted to download a crack version from the internet. A crack is a modified version of the software that bypasses the license verification and allows you to use it for free. However, this is not a good idea for several reasons.
-
First of all, downloading and using a crack is illegal and unethical. You are violating the intellectual property rights of the software developer and breaking the terms of service. You could face legal consequences if you are caught.
-
Secondly, downloading and using a crack is risky and unsafe. You never know what kind of malware or viruses are hidden in the crack file. You could expose your computer and your personal data to hackers and cybercriminals. You could also damage your system or lose your files.
-
Thirdly, downloading and using a crack is unreliable and inefficient. You might not get the full functionality or the latest updates of the software. You might encounter errors or bugs that affect your performance or accuracy. You might also miss out on the customer support and the community resources that come with the licensed version.
-
Therefore, we strongly advise you to avoid using Alphomega Elliott Waves crack for Mac or any other crack software. Instead, you should invest in the original and legitimate version of the software that will give you the best results and the most benefits. You can purchase Alphomega Elliott Waves from their official website or from authorized resellers.
-
-
If you are still unsure about whether Alphomega Elliott Waves is worth your money, you can try their free trial version first. This will allow you to test the software and see how it works for you. You can also check out their online tutorials and reviews to learn more about the features and advantages of Alphomega Elliott Waves.
-
Alphomega Elliott Waves is a great tool for traders who want to apply the Elliott Wave Theory to their analysis and strategies. It can help you improve your skills and increase your profits. However, you should always use it legally and responsibly. Do not download or use Alphomega Elliott Waves crack for Mac or any other crack software. It is not worth the risk or the trouble.
-
-
How does Alphomega Elliott Waves work? Alphomega Elliott Waves is a software that integrates with MetaStock, a popular platform for technical analysis and trading. It adds a new menu and toolbar to MetaStock that allows you to access the Alphomega Elliott Waves features. You can use Alphomega Elliott Waves to scan, filter, and sort the stocks or markets that match the Elliott Wave criteria. You can also use Alphomega Elliott Waves to draw and label the Elliott Wave patterns and indicators on the charts. You can customize the settings and parameters of Alphomega Elliott Waves to suit your preferences and style.
-
What are the benefits of using Alphomega Elliott Waves? Alphomega Elliott Waves can help you gain a deeper understanding of the market psychology and behavior. It can help you identify the current phase and direction of the market cycle and anticipate the possible future scenarios. It can help you find the best entry and exit points for your trades and optimize your risk-reward ratio. It can help you enhance your confidence and discipline as a trader and avoid emotional or impulsive decisions. It can help you save time and effort by automating and simplifying your analysis and trading process.
-
Who can use Alphomega Elliott Waves? Alphomega Elliott Waves is suitable for traders of any level of experience and any type of market. Whether you are a beginner or an expert, whether you trade stocks, forex, commodities, or cryptocurrencies, you can benefit from using Alphomega Elliott Waves. However, you should have some basic knowledge of the Elliott Wave Theory and MetaStock before using Alphomega Elliott Waves. You should also be aware of the limitations and challenges of applying the Elliott Wave Theory to real-world markets. You should not rely solely on Alphomega Elliott Waves or any other software for your trading decisions. You should always do your own research and analysis and use your own judgment and common sense.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/sunnyzhifei/ChatGPTOnline/run_Windows.bat b/spaces/sunnyzhifei/ChatGPTOnline/run_Windows.bat
deleted file mode 100644
index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000
--- a/spaces/sunnyzhifei/ChatGPTOnline/run_Windows.bat
+++ /dev/null
@@ -1,5 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
diff --git a/spaces/supercyx3/ChatSydney/README.md b/spaces/supercyx3/ChatSydney/README.md
deleted file mode 100644
index e8ecc80484156bcec369ea50f942a2b7e3e350af..0000000000000000000000000000000000000000
--- a/spaces/supercyx3/ChatSydney/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TestBing
-emoji: 🐢
-colorFrom: yellow
-colorTo: red
-sdk: docker
-pinned: false
-license: mit
-duplicated_from: rr1/test222
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/supertori/files/stable-diffusion-webui/webui.bat b/spaces/supertori/files/stable-diffusion-webui/webui.bat
deleted file mode 100644
index 5139b7eb020139c65fa6390a7078c761301229b0..0000000000000000000000000000000000000000
--- a/spaces/supertori/files/stable-diffusion-webui/webui.bat
+++ /dev/null
@@ -1,85 +0,0 @@
-@echo off
-
-if not defined PYTHON (set PYTHON=python)
-if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")
-
-
-set ERROR_REPORTING=FALSE
-
-mkdir tmp 2>NUL
-
-%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :check_pip
-echo Couldn't launch python
-goto :show_stdout_stderr
-
-:check_pip
-%PYTHON% -mpip --help >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :start_venv
-if "%PIP_INSTALLER_LOCATION%" == "" goto :show_stdout_stderr
-%PYTHON% "%PIP_INSTALLER_LOCATION%" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :start_venv
-echo Couldn't install pip
-goto :show_stdout_stderr
-
-:start_venv
-if ["%VENV_DIR%"] == ["-"] goto :skip_venv
-if ["%SKIP_VENV%"] == ["1"] goto :skip_venv
-
-dir "%VENV_DIR%\Scripts\Python.exe" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-
-for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i"
-echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%
-%PYTHON_FULLNAME% -m venv "%VENV_DIR%" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-echo Unable to create venv in directory "%VENV_DIR%"
-goto :show_stdout_stderr
-
-:activate_venv
-set PYTHON="%VENV_DIR%\Scripts\Python.exe"
-echo venv %PYTHON%
-
-:skip_venv
-if [%ACCELERATE%] == ["True"] goto :accelerate
-goto :launch
-
-:accelerate
-echo Checking for accelerate
-set ACCELERATE="%VENV_DIR%\Scripts\accelerate.exe"
-if EXIST %ACCELERATE% goto :accelerate_launch
-
-:launch
-%PYTHON% launch.py %*
-pause
-exit /b
-
-:accelerate_launch
-echo Accelerating
-%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py
-pause
-exit /b
-
-:show_stdout_stderr
-
-echo.
-echo exit code: %errorlevel%
-
-for /f %%i in ("tmp\stdout.txt") do set size=%%~zi
-if %size% equ 0 goto :show_stderr
-echo.
-echo stdout:
-type tmp\stdout.txt
-
-:show_stderr
-for /f %%i in ("tmp\stderr.txt") do set size=%%~zi
-if %size% equ 0 goto :endofscript
-echo.
-echo stderr:
-type tmp\stderr.txt
-
-:endofscript
-
-echo.
-echo Launch unsuccessful. Exiting.
-pause
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Photoshop CC 2017 V18.0 64Bits Serial Key Keygen !FULL!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Photoshop CC 2017 V18.0 64Bits Serial Key Keygen !FULL!.md
deleted file mode 100644
index 3f25bb3032c25049c2f425e4adee020dbf6c6099..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Photoshop CC 2017 V18.0 64Bits Serial Key Keygen !FULL!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Photoshop CC 2017 V18.0 64Bits Serial Key Keygen
-
-crack,iri key and main generator with the links we give you under the description. ... Gb) Uptobox Adobe Photoshop CC 2017 v18.0 Multilingual Full Crack (1.3 Gb) ... Adobe Photoshop CC 2017 v18.0 CZ (64 bit) + Crack (1).rar .rar. ... Honor Breakthrough No-cd CrackSimple, However Professional Decorating Tips For Your ... 1fdad05405
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Dlc Xp Media Center 2010 Ultimate Edition Download 2021.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Dlc Xp Media Center 2010 Ultimate Edition Download 2021.md
deleted file mode 100644
index f35d82856b1aa122f981408bce43ad301dea37b9..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Dlc Xp Media Center 2010 Ultimate Edition Download 2021.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Windows Dlc Xp Media Center 2010 Ultimate Edition Download
If you are interested in learning about the mysterious and fascinating world of the Aghoris, a sect of Hindu ascetics who practice extreme rituals and seek enlightenment through unconventional means, then you might want to read Aghor Nagara Vage, a two-volume book by Mohanlal Agrawal.
Aghor Nagara Vage, which means "The City of Aghor", is based on the author's personal experiences and interactions with various Aghoris in India. He narrates his encounters with these extraordinary sadhus, who live in cremation grounds, consume human flesh and intoxicants, perform tantric rites, and possess supernatural powers.
-
The book is not a mere collection of sensational stories, but a sincere attempt to understand the philosophy and psychology of the Aghoris, who defy the norms of society and religion. The author explores their beliefs, practices, goals, and challenges, as well as their views on life, death, and liberation.
-
Aghor Nagara Vage is written in Gujarati language and was first published in 1982 by Navbharat Sahitya Mandir. It has since become a bestseller and a classic in Gujarati literature. It has also been translated into Hindi and English languages.
-
-
If you want to read this book online or download it as a PDF file, you can visit the following websites:
-
-
Goodreads: Here you can find ratings and reviews from other readers, as well as a link to buy the book on Amazon.
-
Open Library: Here you can find details and editions of the book, as well as an option to borrow or download it for free.
-
GujaratiBooks.com: Here you can buy the book in Gujarati language and get it delivered to your address.
-
-
So, if you are curious about the Aghoris and their way of life, don't miss this opportunity to read Aghor Nagara Vage by Mohanlal Agrawal. It will surely open your eyes to a different dimension of reality and spirituality.
-
-
Aghor Nagara Vage: A Journey into the World of Aghoris
-
One of the most intriguing aspects of Aghor Nagara Vage is that it gives the reader a glimpse into the world of the Aghoris, a sect of Hindu ascetics who follow a radical and unconventional path to enlightenment. The Aghoris are not a new phenomenon, but have been around for at least a thousand years, tracing their roots to the Kāpālika tradition, a Tantric form of Shaivism that emerged in medieval India. [1] [2]
-
The Aghoris worship Shiva in his fierce form of Bhairava, and his consort Kali, the goddess of death and destruction. They believe that all opposites are ultimately illusory, and that by transcending the conventional boundaries of purity and impurity, good and evil, life and death, they can attain liberation from the cycle of reincarnation. [1] [2] [3]
-
To achieve this goal, the Aghoris practice various rituals that are considered taboo and repulsive by mainstream Hinduism. They live in cremation grounds, where they meditate on corpses and consume human flesh and ashes. They also use human skulls and bones as ornaments and utensils. They smoke marijuana and drink alcohol as part of their worship. They perform sexual acts with corpses or menstruating women in order to harness the power of kundalini. They also seek contact with spirits and ghosts, whom they regard as their teachers and guides. [1] [2] [3]
-
Aghor Nagara Vage: A Testimony of Faith and Courage
-
Aghor Nagara Vage is not a mere sensational account of the Aghori practices, but a testimony of faith and courage by the author, Mohanlal Agrawal. He was not an Aghori himself, but a devout Hindu who was fascinated by their way of life. He spent many years traveling across India, meeting and interviewing various Aghoris, witnessing their rituals, and learning from their wisdom. He also faced many dangers and challenges along the way, such as being attacked by wild animals, threatened by criminals, and harassed by authorities. [4] [5]
-
Through his book, Agrawal wanted to share his insights and experiences with the readers, and to dispel some of the myths and misconceptions about the Aghoris. He also wanted to show that behind their shocking appearance and behavior, there was a profound philosophy and spirituality that deserved respect and admiration. He wrote with honesty, compassion, and curiosity, without judging or sensationalizing the Aghoris. He also wrote with humility, acknowledging his own limitations and doubts. [4] [5]
-
Aghor Nagara Vage is a rare and remarkable book that offers a unique perspective on one of the most mysterious and misunderstood sects of Hinduism. It is a book that challenges the reader to question their own beliefs and prejudices, and to explore the hidden dimensions of reality and spirituality.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
- This is an implementation of - Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights: Xavier (uniform) init for all Conv2d layers; norm layers keep their ConvModule defaults
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale_factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
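The forward pass above implements the classic FPN top-down pathway: each coarser lateral map is upsampled to the spatial size of the next finer one and summed into it, before the 3x3 `fpn_convs` smooth the merged maps. A minimal self-contained sketch of just that merge step in plain PyTorch (illustrative only; the tensor shapes are made up and this snippet is not part of the deleted file):

```python
import torch
import torch.nn.functional as F

# Stand-ins for the lateral feature maps (already projected to a common
# channel count by the 1x1 lateral convs), ordered finest to coarsest.
laterals = [torch.rand(1, 8, 64, 64),
            torch.rand(1, 8, 32, 32),
            torch.rand(1, 8, 16, 16)]

# Walk from the coarsest level downwards, upsampling each map to the size of
# the next finer level and adding it in (nearest-neighbour upsampling, as in
# the default upsample_cfg=dict(mode='nearest')).
for i in range(len(laterals) - 1, 0, -1):
    prev_shape = laterals[i - 1].shape[2:]
    laterals[i - 1] = laterals[i - 1] + F.interpolate(
        laterals[i], size=prev_shape, mode='nearest')

for i, feat in enumerate(laterals):
    print(f'merged[{i}].shape = {feat.shape}')
# merged[0].shape = torch.Size([1, 8, 64, 64]): the finest map now carries coarser context
```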
diff --git a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/modules.py b/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
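- # gated activation (WaveNet-style): tanh of one half times sigmoid of the other, with the conditioning g_l added in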
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
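- # split the projected spline parameters: num_bins widths, num_bins heights, num_bins - 1 knot derivatives per channel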
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
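The `ResidualCouplingLayer` above is a standard affine coupling flow: half of the channels pass through unchanged and parameterize a shift `m` and log-scale `logs` for the other half, which makes the transform exactly invertible. A small self-contained sketch of that forward/reverse relationship (illustrative only; toy `m` and `logs` stand in for the WN output, and the mask is omitted):

```python
import torch

torch.manual_seed(0)
x = torch.randn(1, 4, 8)                    # [batch, channels, time]
x0, x1 = torch.split(x, 2, dim=1)           # x0 is kept, x1 is transformed

# In the real layer these come from self.post(self.enc(self.pre(x0))).
m = 0.1 * torch.randn_like(x1)
logs = 0.1 * torch.randn_like(x1)

# Forward direction (reverse=False): shift-and-scale, log-det = sum of logs.
y1 = m + x1 * torch.exp(logs)
logdet = torch.sum(logs, dim=[1, 2])

# Reverse direction (reverse=True): the exact inverse.
x1_rec = (y1 - m) * torch.exp(-logs)
print(torch.allclose(x1, x1_rec, atol=1e-6))   # True: the coupling is invertible
print(logdet.shape)                            # torch.Size([1])
```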
diff --git a/spaces/takuuuuuuu/stabilityai-stable-diffusion-xl-base-1.0/README.md b/spaces/takuuuuuuu/stabilityai-stable-diffusion-xl-base-1.0/README.md
deleted file mode 100644
index 9a1ae89d4b5e7ac3e9bbae8e4b160e8a1db22022..0000000000000000000000000000000000000000
--- a/spaces/takuuuuuuu/stabilityai-stable-diffusion-xl-base-1.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion Xl Base 1.0
-emoji: 📉
-colorFrom: indigo
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/terfces0erbo/CollegeProjectV2/3d Master Kit Crack Free Download.md b/spaces/terfces0erbo/CollegeProjectV2/3d Master Kit Crack Free Download.md
deleted file mode 100644
index ded398497165ac717cc216cb17ae1cc86e4b921f..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/3d Master Kit Crack Free Download.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
3D Master Kit Crack Free Download: A Complete Guide
-
If you are looking for a way to create stunning lenticular images with realistic motion and 3D effects, you might be interested in 3D Master Kit Crack Free Download. This is a powerful software that allows you to convert any 2D image into a 3D lenticular image, or create a 3D image from scratch. You can also use it to make animations, flip images, morphing images, and zoom images. In this article, we will show you how to download and install 3D Master Kit Crack Free Download, and how to use its features to create amazing 3D images.
3D Master Kit Crack Free Download is a cracked version of 3D Master Kit, a software developed by Triaxes, a company that specializes in 3D technologies. 3D Master Kit is designed for lenticular printing and 3D conversion. Lenticular printing is a technique that uses special lenses to create an illusion of depth or motion on a flat surface. 3D conversion is a process that transforms a 2D image into a 3D image by adding depth information.
-
With 3D Master Kit Crack Free Download, you can create lenticular images without paying for the license or activation of the software. You can also use it without any time limit or logo overlay on the output image. However, using a cracked software may have some risks, such as viruses, malware, or legal issues. Therefore, we do not recommend using 3D Master Kit Crack Free Download for commercial purposes or without the permission of the original developer.
-
How to Download and Install 3D Master Kit Crack Free Download?
-
To download and install 3D Master Kit Crack Free Download, you need to follow these steps:
Select the version of the software that matches your operating system (Windows or Mac) and your system architecture (32-bit or 64-bit).
-
Click on the download link and save the file to your computer.
-
Extract the file using a program like WinRAR or 7-Zip.
-
Run the setup file and follow the instructions to install the software.
-
Copy the crack file from the extracted folder and paste it into the installation directory of the software.
-
Launch the software and enjoy creating lenticular images.
-
-
How to Use 3D Master Kit Crack Free Download?
-
To use 3D Master Kit Crack Free Download, you need to have some basic knowledge of lenticular printing and 3D conversion. You also need to have some source images that you want to convert into lenticular images. Here are some steps to help you get started:
-
-
-
Open the software and select the project type that suits your needs. You can choose from Animation, Flip, Morphing, Zoom, or Multi-view.
-
Add your source images to the project by clicking on the Add button or dragging and dropping them into the workspace.
-
Edit your source images using the tools provided by the software. You can crop, rotate, resize, align, adjust colors, add masks, and more.
-
Generate the interlaced image by clicking on the Interlace button. This will create a single image that contains multiple frames of your source images arranged in a specific order.
-
Preview your interlaced image by clicking on the Preview button. You can see how your lenticular image will look like when viewed from different angles.
-
Save your interlaced image by clicking on the Save button. You can choose from various formats, such as BMP, JPG, PNG, TIFF, PSD, etc.
-
Print your interlaced image using a printer that supports lenticular printing. You also need to have some lenticular sheets that match the size and resolution of your interlaced image.
-
Laminate your interlaced image with the lenticular sheet using a laminator or an adhesive. Make sure to align them properly and avoid air bubbles.
-
Cut your lenticular image using a cutter or a scissors. Be careful not to damage the lenticular sheet or the interlaced image.
-
Enjoy your lenticular image by viewing it from different angles and seeing its motion or depth effects.
-
-
Conclusion
-
3D Master Kit Crack Free Download is a software that allows you to create lenticular images with realistic motion and 3D effects. You can use it to convert any 2D image into a 3D lenticular image, or create a 3D image from scratch. You can also use it to make animations, flip images, morphing images, and zoom images. However, using a cracked software may have some risks, such as viruses, malware, or legal issues. Therefore, we do not recommend using 3D Master Kit Crack Free Download for commercial purposes or without the permission of the original developer.
-
What are the Benefits of 3D Master Kit Crack Free Download?
-
3D Master Kit Crack Free Download has many benefits for anyone who wants to create lenticular images with realistic motion and 3D effects. Some of these benefits are:
-
-
It is easy to use and has a user-friendly interface. You can create lenticular images in a few simple steps, without any special skills or knowledge.
-
It is versatile and supports various types of lenticular effects, such as animation, flip, morphing, zoom, and multi-view. You can also create 3D images from 2D images or from scratch.
-
It is compatible with various image formats, such as BMP, JPG, PNG, TIFF, PSD, etc. You can also import and export PSD files with layers.
-
It is fast and efficient. You can generate high-quality interlaced images in a short time, and preview them before printing.
-
It is cost-effective. You can save money by using a cracked version of the software, instead of paying for the license or activation.
-
-
What are the Drawbacks of 3D Master Kit Crack Free Download?
-
3D Master Kit Crack Free Download also has some drawbacks that you should be aware of before using it. Some of these drawbacks are:
-
-
It is illegal and unethical. Using a cracked software violates the intellectual property rights of the original developer, and may result in legal consequences or penalties.
-
It is risky and unsafe. Using a cracked software may expose your computer to viruses, malware, or other harmful programs that may damage your system or compromise your data.
-
It is unreliable and unsupported. Using a cracked software may cause errors, bugs, or crashes that may affect your work or output. You also cannot get any updates, patches, or technical support from the original developer.
-
-
How to Get the Original Version of 3D Master Kit?
-
If you want to use the original version of 3D Master Kit, you need to purchase a license from the official website of Triaxes. You can choose from different editions of the software, depending on your needs and budget. The prices range from $69 to $999. You can also get a free trial version of the software for 30 days, with some limitations on the size and resolution of the output image.
-
To purchase a license of 3D Master Kit, you need to follow these steps:
Select the edition of the software that suits your needs and click on the Add to Cart button.
-
Fill in your billing information and payment method and click on the Place Order button.
-
Check your email for the confirmation and activation code of your purchase.
-
Download and install the software from the link provided in the email.
-
Enter your activation code in the software and enjoy creating lenticular images legally and safely.
-
-
What are the Features of 3D Master Kit Crack Free Download?
-
3D Master Kit Crack Free Download has many features that make it a powerful and versatile software for creating lenticular images. Some of these features are:
-
-
It supports various types of lenticular effects, such as animation, flip, morphing, zoom, and multi-view. You can create lenticular images with realistic motion and 3D effects that change depending on the viewing angle.
-
It supports various types of source images, such as 2D images, 3D images, stereo pairs, video frames, etc. You can convert any 2D image into a 3D lenticular image, or create a 3D image from scratch using the built-in editor.
-
It supports various types of output formats, such as BMP, JPG, PNG, TIFF, PSD, etc. You can also export your interlaced image as a PSD file with layers.
-
It has a user-friendly interface and a simple workflow. You can create lenticular images in a few simple steps, without any special skills or knowledge.
-
It has a preview function that allows you to see how your lenticular image will look like when viewed from different angles. You can also adjust the parameters of your interlaced image to optimize its quality and performance.
-
It has a batch processing function that allows you to process multiple projects at once. You can save time and effort by creating multiple lenticular images with different effects and settings.
-
-
What are the Tips and Tricks for Using 3D Master Kit Crack Free Download?
-
3D Master Kit Crack Free Download is a software that requires some tips and tricks to use it effectively and efficiently. Here are some tips and tricks that can help you create better lenticular images:
-
-
Choose the right type of lenticular effect for your project. Different types of lenticular effects have different requirements and limitations. For example, animation requires more frames than flip or morphing, but has less depth than multi-view.
-
Choose the right type of source images for your project. Different types of source images have different advantages and disadvantages. For example, 2D images are easier to find and edit than 3D images, but have less depth and realism than 3D images.
-
Choose the right type of output format for your project. Different types of output formats have different qualities and compatibilities. For example, BMP has higher quality than JPG, but has larger file size than JPG.
-
Edit your source images before adding them to the project. You can use a third-party editor (such as GIMP or Photoshop) to crop, rotate, resize, align, adjust colors, add masks, and more. This will improve the quality and consistency of your source images.
-
Align your source images properly in the project. You can use the tools provided by the software to align your source images according to the depth and perspective. This will improve the smoothness and realism of your lenticular image.
-
Adjust the parameters of your interlaced image according to your needs. You can use the tools provided by the software to adjust the parameters such as pitch, angle, resolution, size, etc. This will optimize the quality and performance of your interlaced image.
-
-
Conclusion
-
3D Master Kit Crack Free Download is a software that allows you to create lenticular images with realistic motion and 3D effects. You can use it to convert any 2D image into a 3D lenticular image, or create a 3D image from scratch. You can also use it to make animations, flip images, morphing images, and zoom images. However, using a cracked software may have some risks, such as viruses, malware, or legal issues. Therefore, we do not recommend using 3D Master Kit Crack Free Download for commercial purposes or without the permission of the original developer. If you want to use the original version of 3D Master Kit, you need to purchase a license from the official website of Triaxes. You can also get a free trial version of the software for 30 days, with some limitations on the size and resolution of the output image.
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Atheros Security Ndis 60 Filter Driver Download WORK.md b/spaces/terfces0erbo/CollegeProjectV2/Atheros Security Ndis 60 Filter Driver Download WORK.md
deleted file mode 100644
index 4258d16c5106d415bcc04d4f3340c35ec4d4a4fc..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Atheros Security Ndis 60 Filter Driver Download WORK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-McAfee Endpoint Security Firewall (ENS) 10.5.x, 10.6.x McAfee Host Intrusion ... When Host IPS is installed, the insertion of the firewall's NDIS driver causes ... NDIS 6.0 filter drivers can be dynamically inserted into or removed from a ... Get the McAfee Enterprise Support app on Google Play Download the ... 4d29de3e1b
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Full ((BETTER)) Universal Remote MX-900 Editor.md b/spaces/terfces0erbo/CollegeProjectV2/Full ((BETTER)) Universal Remote MX-900 Editor.md
deleted file mode 100644
index 5e785e0a6cfd3a6fbe663b45dc40e7d4db48d8f5..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Full ((BETTER)) Universal Remote MX-900 Editor.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-Tried loading a blank program into the remote to reset, then built a blank template into the remote for 2 devices, just the TV & cable. They did ... I downloaded the video from my video, and I looked at it.
-He asked me to download a blank template. I did that, and it asked me to plug in the cables, I plugged them in, and then I tried to download the video again. It didn't want to load the video.
-Then I tried to upload a blank video from another cell phone. It also wouldn't load the video.
-I looked at it from the other end, and I noticed that there was only one device, and then I realized that the whole device was connected to the TV. 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Geek Squad Mri 5.7.1 Cracked LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Geek Squad Mri 5.7.1 Cracked LINK.md
deleted file mode 100644
index 8eb1d4903f20751edb88484269cc19a827334965..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Geek Squad Mri 5.7.1 Cracked LINK.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
Geek Squad Mri 5.7.1 Cracked: The Ultimate Repair Tool for Windows
-
-
Geek Squad Mri 5.7.1 Cracked is a version of the Geek Squad MRI tool that has been modified and distributed by a group of hackers called SOLDIERX. The Geek Squad MRI tool is a repair disc that is used by the Geek Squad technicians to fix computers. It has various features such as antivirus, antispyware, disk cleaner, process list, winsock fix, and more. The Geek Squad MRI tool is supposed to be for internal use only, confidential, and a trade secret. However, SOLDIERX has cracked the tool since version 4.8.1 and replaced the Geek Squad propaganda with their own. The latest public release by SOLDIERX is 5.1.1.0.
Geek Squad Mri 5.7.1 Cracked is a modified version of the Geek Squad MRI tool that has been cracked and distributed by SOLDIERX. The original Geek Squad MRI tool is a repair disc that is used by the Geek Squad technicians to fix computers that have various issues with Windows, such as spyware, viruses, corrupted files, slow performance, etc. The tool has a graphical user interface that makes it easy to use and navigate. The tool also has a logo that represents the Geek Squad brand and service.
-
-
However, Geek Squad Mri 5.7.1 Cracked is different from the original tool in several ways:
-
-
-
It has been cracked by SOLDIERX, which means that it can be used without the authorization or permission of Best Buy, the owner of Geek Squad.
-
It has been modified by SOLDIERX, which means that it has some features or functions that are different from or not present in the original tool.
-
It has been distributed by SOLDIERX, which means that it can be downloaded from various torrent sites or forums that host the file.
-
It has a logo that represents SOLDIERX instead of Geek Squad.
-
-
-
Why would you use Geek Squad Mri 5.7.1 Cracked?
-
-
Some people may use Geek Squad Mri 5.7.1 Cracked for various reasons, such as:
-
-
-
They want to fix their own computers without paying for the Geek Squad service.
-
They want to learn how the Geek Squad MRI tool works and what it can do.
-
They want to have access to a powerful and versatile repair tool that can handle various issues with Windows.
-
-
-
However, using Geek Squad Mri 5.7.1 Cracked also comes with some risks and drawbacks, such as:
-
-
-
It may be illegal to use the tool without the permission of Best Buy, the owner of Geek Squad.
-
It may be unethical to use the tool that is supposed to be a trade secret of Geek Squad.
-
It may be unsafe to use the tool that has been cracked and modified by unknown hackers.
-
It may be unreliable to use the tool that may not be updated or supported by Geek Squad or SOLDIERX.
-
-
-
How can you download and use Geek Squad Mri 5.7.1 Cracked?
-
-
If you still want to download and use Geek Squad Mri 5.7.1 Cracked, you can do so at your own risk and responsibility. Here are some steps you can follow:
Download the ISO file using a torrent client such as uTorrent or BitTorrent.
-
Burn the ISO file to a DVD disc using a software such as ImgBurn or Nero.
-
Boot your computer from the DVD disc and follow the instructions on the screen.
-
-
-
Conclusion
-
-
Geek Squad Mri 5.7.1 Cracked is a version of the Geek Squad MRI tool that has been modified and distributed by a group of hackers called SOLDIERX. The Geek Squad MRI tool is a repair disc that is used by the Geek Squad technicians to fix computers. It has various features such as antivirus, antispyware, disk cleaner, process list, winsock fix, and more. The Geek Squad MRI tool is supposed to be for internal use only, confidential, and a trade secret. However, SOLDIERX has cracked the tool since version 4.8.1 and replaced the Geek Squad propaganda with their own. The latest public release by SOLDIERX is 5.1.1.0.
-
-
Some people may use Geek Squad Mri 5.7.1 Cracked for various reasons, such as fixing their own computers, learning how the tool works, or having access to a powerful repair tool. However, using Geek Squad Mri 5.7.1 Cracked also comes with some risks and drawbacks, such as being illegal, unethical, unsafe, or unreliable.
-
-
If you still want to download and use Geek Squad Mri 5.7.1 Cracked, you can do so at your own risk and responsibility by going to one of the torrent sites that host the file, downloading the ISO file using a torrent client, burning the ISO file to a DVD disc using a software, and booting your computer from the DVD disc.
-
-
If you are looking for a reliable and safe repair tool for your computer, you may want to consider other alternatives such as Windows Repair Toolbox or Hiren's BootCD PE.
-
How to Crack Adobe CS6 Master Collection with X-Force Keygen
-
Adobe CS6 Master Collection is a suite of creative software that includes Photoshop, Illustrator, InDesign, Premiere Pro, After Effects, Dreamweaver, Flash, and more. It allows you to create stunning designs, graphics, videos, websites, and applications for various platforms and devices. However, it is not free and requires a valid serial number and activation code to use it.
If you want to crack Adobe CS6 Master Collection with X-Force Keygen, you need to follow these steps:
-
-
Disable your network card or pull the network cable out. And make sure you don't have any of these entries in your hosts file: 127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com
-
Install the Master Collection CS6 with a serial generated from the X-Force Keygen (do not close the keygen!). When the error "Please connect to the internet and retry" shows click connect later.
-
Launch an Adobe application (Photoshop, Illustrator, etc.). Confirm you have "connection problem" and you want it to activate offline.
-
A request code will be generated. Use it with the serial you used to install Adobe to generate your activation code.
-
Validate it of course.
-
When installation is finished, run the disable_activation.cmd (double click on it) (in Vista or Win7, run it as admin if you have uac enabled) or do it manually by adding these lines to the bottom of your hosts file: # Adobe Blocker 127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com
-
After it has been activated, re-enable your network card and run the Adobe updater to update your software to the latest version.
-
Enjoy!
-
-
Note: If you encounter any issues with a previous installation or crack, please uninstall Master Collection and delete these folders: C:\\Program Files (x86)\\Common Files\\Adobe\\SLCache C:\\ProgramData\\Adobe\\SLStore
-
For Mac OSX, the steps are similar but you need to run the disable_activation_osx as root and use a different keygen.
-
This method is based on the instructions from [^1^] [^2^] [^3^] [^4^] [^5^]. However, cracking Adobe CS6 Master Collection is illegal and may cause security risks or damage to your system. I do not condone or endorse this practice and I recommend you to purchase a legitimate license from Adobe instead.
-
Adobe CS6 Master Collection offers a comprehensive set of tools for creative professionals who want to work across different media and platforms. It includes the following applications:
-
-
Photoshop CS6: The industry-standard software for image editing and manipulation. It allows you to create and enhance photos, illustrations, 3D artwork, and more.
-
Illustrator CS6: The vector graphics software for creating logos, icons, sketches, typography, and complex illustrations. It lets you design with precision and control.
-
InDesign CS6: The page layout and design software for print and digital publishing. It enables you to create stunning layouts for magazines, books, flyers, brochures, and more.
-
Premiere Pro CS6: The video editing software for producing high-quality videos for film, TV, and web. It offers a streamlined workflow and powerful editing features.
-
After Effects CS6: The motion graphics and visual effects software for creating cinematic animations and effects. It allows you to add realism and style to your videos.
-
Dreamweaver CS6: The web design and development software for creating responsive websites and applications. It supports HTML5, CSS3, JavaScript, PHP, and more.
-
Flash Professional CS6: The interactive media authoring software for creating engaging animations and games. It supports ActionScript 3.0, AIR, and mobile platforms.
-
And more: Adobe CS6 Master Collection also includes Acrobat X Pro, Audition CS6, Bridge CS6, Encore CS6, Fireworks CS6, Media Encoder CS6, Prelude CS6, SpeedGrade CS6, and Story Plus.
-
-
With Adobe CS6 Master Collection, you can unleash your creativity and deliver amazing results across different media and devices. You can also integrate your work with other Adobe products and services such as Adobe Creative Cloud, Adobe Touch Apps, Adobe Digital Publishing Suite, and more.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Clap For Finding Phone Mod Unlock All ((FREE)).md b/spaces/tialenAdioni/chat-gpt-api/logs/Clap For Finding Phone Mod Unlock All ((FREE)).md
deleted file mode 100644
index 335b94993f70d82b25c2330e635798ce710ca775..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Clap For Finding Phone Mod Unlock All ((FREE)).md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
Clap For Finding Phone Mod Unlock All: A Handy Tool to Locate Your Lost Phone
-
Have you ever misplaced your phone and spent hours looking for it? Have you ever wished you could just clap your hands and make your phone ring? If so, you might be interested in Clap For Finding Phone Mod Unlock All, a premium version of the popular app Clap To Find My Phone.
-
Clap To Find My Phone is an app that helps you find your phone by detecting the sound of your clapping. It works even if your phone is on silent mode or has no GPS. You can customize the sensitivity, ringtone, volume, and vibration of the app to suit your preferences. You can also use other features such as flashlight on call, flash alerts on notification and SMS, SMS and caller name talker, call blocking, battery level alert, and PIN protection.
Clap For Finding Phone Mod Unlock All is a modded version of the app that unlocks all the premium features for free. You can enjoy the full functionality of the app without any ads or limitations. You can also access the menu mod that allows you to enable or disable any feature you want.
-
If you want to download Clap For Finding Phone Mod Unlock All, you can find it on various websites that offer modded APK files. However, you should be careful about the source and the security of the file. Some modded APK files may contain viruses or malware that can harm your device or steal your data. You should always scan the file before installing it and check the permissions it requires.
-
Clap For Finding Phone Mod Unlock All is a useful tool for anyone who often loses their phone or wants to have more control over their phone settings. It is easy to use and effective in finding your phone by clapping. However, you should be aware of the risks involved in downloading and installing modded APK files from unknown sources. You should also respect the original developers of the app and support them if you like their work.
-
-
How to use Clap For Finding Phone Mod Unlock All? Using this app is very simple and convenient. You just need to follow these steps:
-
-
Launch the app and click on the Find My Phone button to enable this feature.
-
Enable the toggle button and adjust the sound frequency, notification, and flash blink speed in Settings.
-
Choose a tone from the default options or select one from your phone storage.
-
The frequency or sensitivity your phone detects is based on the environment which you can set from 1 to 10.
-
You can toggle the flash on or off or set the interval time to vary between 50 to 1500 ms.
-
Clap three times with a 2-second interval and your phone will start ringing and flashing.
-
-
What are the benefits of Clap For Finding Phone Mod Unlock All? This app has many benefits that make it worth downloading and using. Some of them are:
-
-
It saves you time and effort in finding your phone by clapping instead of searching everywhere.
-
It works even if your phone is on silent mode or has no GPS signal.
-
It offers various other features such as flashlight on call, flash alert on notification and SMS, SMS and caller name talker, call blocking, battery level alert, and PIN protection.
-
It allows you to access the menu mod that lets you enable or disable any feature you want.
-
It unlocks all the premium features for free without any ads or limitations.
-
-
Clap For Finding Phone Mod Unlock All is a handy tool to locate your lost phone by clapping. It is easy to use, convenient, and effective. It also provides many other useful features that enhance your phone experience. However, you should be careful about downloading and installing modded APK files from unknown sources as they may contain viruses or malware. You should also respect the original developers of the app and support them if you like their work.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Digital LimitedPack [OST Art Book] [ ] (Touhou Genso Wanderer -Reloaded-) Download !!TOP!! Fo.md b/spaces/tialenAdioni/chat-gpt-api/logs/Digital LimitedPack [OST Art Book] [ ] (Touhou Genso Wanderer -Reloaded-) Download !!TOP!! Fo.md
deleted file mode 100644
index ebb0a2163f885178cf6a59003ed08f512c13fda5..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Digital LimitedPack [OST Art Book] [ ] (Touhou Genso Wanderer -Reloaded-) Download !!TOP!! Fo.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download the Digital LimitedPack [OST & Art Book] - [ ] (Touhou Genso Wanderer -Reloaded-)
-
If you are a fan of the Touhou Project series, you might be interested in getting the Digital LimitedPack [OST & Art Book] - [ ] (Touhou Genso Wanderer -Reloaded-), a special edition of the roguelike RPG game that features the original soundtrack and a digital art book. Here is how you can download it for your PC or console.
-
-
First, you need to purchase the game Touhou Genso Wanderer -Reloaded- from Steam, PlayStation Store, or Nintendo eShop. The game is available for Windows, PlayStation 4, and Nintendo Switch.
-
Next, you need to access the DLC page of the game on the platform you bought it from. You can find it by searching for "Touhou Genso Wanderer -Reloaded- DLC" or by following the links below:
Then, you need to buy the Digital LimitedPack [OST & Art Book] - [ ] (Touhou Genso Wanderer -Reloaded-) DLC for $9.99. This will give you access to download the OST and the art book in digital format.
-
Finally, you need to download and install the DLC on your device. You can find it in your library or download list on the platform you bought it from. The OST will be in MP3 format and the art book will be in PDF format. You can enjoy them on your PC or console, or transfer them to other devices.
-
-
That's it! You have successfully downloaded the Digital LimitedPack [OST & Art Book] - [ ] (Touhou Genso Wanderer -Reloaded-). Enjoy the music and the artwork of this amazing game!
-
If you are wondering whether Touhou Genso Wanderer -Reloaded- is worth playing, you might want to check out some of the reviews from critics and players. The game has received mostly positive feedback for its gameplay, graphics, music, and content. Here are some of the highlights:
-
-
The game is based on the Touhou Project world, a popular series of bullet hell shoot 'em up games created by ZUN. You can play as various Touhou characters, each with their own abilities and personalities. You can also recruit other characters as partners and switch between them during battles.
-
The game is a roguelike RPG, which means that the dungeons are randomly generated and you lose all your items and levels if you die. However, the game also has some features that make it easier for beginners, such as the ability to store items in a warehouse, use items to escape from dungeons, and revive with a certain amount of money.
-
The game has tons of advanced options for pros, such as different difficulty modes, challenge quests, unlockable characters, hidden dungeons, and more. The game also has a lot of replay value, as you can explore different routes and endings depending on your choices.
-
The game has a charming pixel art style that captures the essence of the Touhou Project world. The animations are smooth and colorful, and the character portraits are expressive and detailed. The game also has a great soundtrack that features remixes of classic Touhou songs and original compositions.
-
The game has a lot of content to enjoy, as it includes the original Touhou Genso Wanderer game and all its DLCs, plus new stories and scenarios that expand the lore and characters of the Touhou Project world. The game also has a digital art book that showcases the artwork of the game and its creators.
-
-
In conclusion, Touhou Genso Wanderer -Reloaded- is a great game for fans of the Touhou Project series and roguelike RPGs. It offers a fun and challenging gameplay experience that can be customized to your preferences. It also has a lot of charm and personality that will make you fall in love with the Touhou Project world.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Immersive Explorer Full Version A File Explorer That Focuses on the Content.md b/spaces/tialenAdioni/chat-gpt-api/logs/Immersive Explorer Full Version A File Explorer That Focuses on the Content.md
deleted file mode 100644
index 4d0e1a28c323aa18dda66575d8f5ae2da0843408..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Immersive Explorer Full Version A File Explorer That Focuses on the Content.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
Immersive Explorer: A Modern File Manager for Windows
-
If you are looking for a file manager that can give you a fresh and modern experience on your Windows PC, you might want to check out Immersive Explorer. Immersive Explorer is a free and portable software that lets you explore your files and folders using the metro style from Windows 8. It is not a replacement for Windows Explorer, but rather an alternative way to view, organize and manage your files.
-
Immersive Explorer has many features that make it a powerful and versatile file manager. Some of these features are:
Compression: You can compress files for storage and sharing using various formats such as ZIP, RAR, 7Z and more.
-
Conversion: You can convert files between different formats such as images, audio, video and documents.
-
Easy to use: Immersive Explorer has an intuitive and user-friendly interface that is easy to navigate and customize.
-
Encryption: You can secure your files with encryption using AES-256 or other algorithms.
-
File Sharing: You can share your files with other users via email, cloud services or network drives.
-
File Tagging: You can tag your files for easy access and organization using keywords, colors or icons.
-
Management: You can view, organize and manage your files using various tools such as copy, move, delete, rename, sort, filter and more.
-
Merging: You can merge multiple files into one file using various formats such as PDF, MP3 or MP4.
-
Preview: You can preview your files in a variety of formats such as images, audio, video, documents and more.
-
Renaming: You can rename your files quickly and easily using various methods such as batch rename, regex rename or auto rename.
-
Repair: You can repair corrupt or damaged files using various tools such as CHKDSK, SFC or DISM.
-
Search: You can quickly find your files with powerful search capabilities such as full-text search, advanced search or search by attributes.
-
Splitting: You can split large files into smaller pieces using various methods such as size, number or custom.
-
Synchronization: You can synchronize your files across devices using various methods such as cloud sync, network sync or local sync.
-
Versioning: You can maintain multiple versions of your files using various methods such as backup, restore or history.
-
-
Immersive Explorer is compatible with Windows 11 and Windows 10. It is also compatible with Windows 8 and Windows 7. It comes in both 32-bit and 64-bit versions. It does not require installation and does not modify the registry. It is a freemium software that requires you to pay a license fee to unlock additional features that are not accessible with the free version. However, the free version still offers a lot of functionality and customization options.
-
If you want to download Immersive Explorer, you can visit its official website[^1^] or its YouTube channel[^2^]. You can also find more information about Immersive Explorer on its Trello board[^3^] or its SoundCloud page[^4^]. Immersive Explorer is a great file manager that can give you a new and exciting way to explore your PC using the metro style from Windows 8. Try it out today and see for yourself!
e753bf7129
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Initial D Fifth Stage Episode 15 Vostfr le dnouement de la bataille des Project D.md b/spaces/tialenAdioni/chat-gpt-api/logs/Initial D Fifth Stage Episode 15 Vostfr le dnouement de la bataille des Project D.md
deleted file mode 100644
index c3dce3ed1b3ba9743c56ca214e6b3ea1bd4883a6..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Initial D Fifth Stage Episode 15 Vostfr le dnouement de la bataille des Project D.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
initial d 5th stage ep 15 french subtitles
-watch initial d fifth stage episode 15 online vostfr
-initial d season 5 episode 15 streaming vostfr
-initial d fifth stage episode 15 vostfr download
-initial d 5th stage episode 15 vostfr hd
-initial d fifth stage episode 15 vostfr youtube
-initial d season 5 episode 15 vostfr review
-initial d fifth stage episode 15 vostfr reaction
-initial d 5th stage episode 15 vostfr full
-initial d fifth stage episode 15 vostfr free
-initial d season 5 episode 15 vostfr recap
-initial d fifth stage episode 15 vostfr eng sub
-initial d 5th stage episode 15 vostfr dailymotion
-initial d fifth stage episode 15 vostfr anime
-initial d season 5 episode 15 vostfr wiki
-initial d fifth stage episode 15 vostfr reddit
-initial d 5th stage episode 15 vostfr facebook
-initial d fifth stage episode 15 vostfr kissanime
-initial d season 5 episode 15 vostfr summary
-initial d fifth stage episode 15 vostfr crunchyroll
-initial d 5th stage episode 15 vostfr gogoanime
-initial d fifth stage episode 15 vostfr funimation
-initial d season 5 episode 15 vostfr spoilers
-initial d fifth stage episode 15 vostfr trailer
-initial d 5th stage episode 15 vostfr netflix
-initial d fifth stage episode 15 vostfr hulu
-initial d season 5 episode 15 vostfr release date
-initial d fifth stage episode 15 vostfr soundtrack
-initial d 5th stage episode 15 vostfr ost
-initial d fifth stage episode 15 vostfr manga
-initial d season 5 episode 15 vostfr characters
-initial d fifth stage episode 15 vostfr cast
-initial d 5th stage episode 15 vostfr voice actors
-initial d fifth stage episode 15 vostfr quotes
-initial d season 5 episode 15 vostfr memes
-initial d fifth stage episode 15 vostfr analysis
-initial d 5th stage episode 15 vostfr discussion
-initial d fifth stage episode 15 vostfr rating
-initial d season 5 episode 15 vostfr score
-initial d fifth stage episode 15 vostfr imdb
-initial d 5th stage episode 15 vostfr mal
-initial d fifth stage episode 15 vostfr anilist
-initial d season 5 episode 15 vostfr live action
-initial d fifth stage episode 15 vostfr game
-initial d 5th stage episode 15 vostfr ps4
-initial d fifth stage episode 15 vostfr switch
-initial d season 5 episode 15 vostfr pc
-initial d fifth stage episode 15 vostfr wallpaper
-initial d
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bad 2 Bad Apocalypse Premium Mod APK - Unlimited Money and Ammo.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bad 2 Bad Apocalypse Premium Mod APK - Unlimited Money and Ammo.md
deleted file mode 100644
index ce88afb1bee240c788b843d503a7910b022bd688..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bad 2 Bad Apocalypse Premium Mod APK - Unlimited Money and Ammo.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Bad 2 Bad: Apocalypse Premium Mod APK - A Review
-
If you are a fan of action-packed shooting games with cute animal characters, you might have heard of Bad 2 Bad: Apocalypse. This is a popular game that has received millions of downloads and positive reviews from players around the world. But what if you want to enjoy the game with more features and advantages? That's where Bad 2 Bad: Apocalypse Premium Mod APK comes in. In this article, we will review this modded version of the game and tell you how to download and install it on your device.
Bad 2 Bad: Apocalypse is a sequel to the previous games Bad to Bad: Delta and Extinction. It is a third-person shooter game that follows the adventures of the Delta Team, a group of elite animal soldiers who fight against the evil forces of the Tailless Legion. The game has a rich and immersive story, as well as a variety of gameplay modes and challenges.
-
The story and gameplay of Bad 2 Bad: Apocalypse
-
The story of Bad 2 Bad: Apocalypse takes place in a post-apocalyptic world where humans have disappeared and animals have taken over. The Tailless Legion, led by Major Pan, is a terrorist organization that seeks to destroy all life on Earth. The Delta Team, led by Captain Bae, is a special force that opposes them and tries to save the world. The game has over 100 missions that span different regions and scenarios, such as deserts, jungles, cities, and underground bases. You can choose from over 20 characters, each with their own skills and weapons, and customize them with various items and upgrades. You can also recruit allies and pets to help you in battle.
-
The features and graphics of Bad 2 Bad: Apocalypse
-
Bad 2 Bad: Apocalypse has many features that make it an enjoyable and addictive game. Some of them are:
-
-
Realistic physics and ragdoll effects: Dynamic physics and ragdoll animations create satisfying combat scenes; you can see your enemies fly, bounce, roll, and explode as you shoot them.
-
Diverse enemies and bosses: The game has over 60 types of enemies and bosses, each with their own behaviors and patterns. You will face zombies, mutants, robots, aliens, and more.
-
Multiple game modes: The game has several game modes that offer different challenges and rewards. You can play the story mode, the survival mode, the defense mode, the raid mode, the boss rush mode, and more.
-
Stunning graphics and sound effects: The game has high-quality graphics and sound effects that create an immersive atmosphere. You can see the details of the environments, the characters, and the weapons. You can also hear the realistic sounds of gunfire, explosions, screams, and music.
-
-
The game also has a simple and intuitive control system that allows you to move, aim, shoot, reload, switch weapons, use skills, and interact with objects easily.
-
What is Bad 2 Bad: Apocalypse Premium Mod APK?
-
Bad 2 Bad: Apocalypse Premium Mod APK is a modified version of the original game that gives you some extra benefits and advantages. It is not an official version of the game, but a fan-made one that you can download and install from third-party sources. Some of the benefits and advantages of using Bad 2 Bad: Apocalypse Premium Mod APK are:
-
-
Unlimited money and gems: With this mod, you will have unlimited money and gems that you can use to buy and upgrade weapons, items, skills, and characters. You will not have to worry about running out of resources or spending real money on the game.
-
All characters unlocked: With this mod, you will have access to all the characters in the game, including the premium ones that require real money or special conditions to unlock. You will be able to choose any character you want and enjoy their unique abilities and weapons.
-
No ads: With this mod, you will not see any ads in the game. You will be able to play the game without any interruptions or distractions.
-
-
However, there are also some drawbacks of using Bad 2 Bad: Apocalypse Premium Mod APK. Some of them are:
-
-
Potential risks of malware and viruses: Since this mod is not an official version of the game, it may contain malware or viruses that can harm your device or steal your personal information. You should always download and install the mod from trusted and reliable sources, and scan it with an antivirus before using it.
-
Possible compatibility issues and bugs: Since this mod is not an official version of the game, it may not be compatible with the latest updates or features of the game. It may also have some bugs or glitches that can affect the gameplay or performance of the game. You should always backup your data before using the mod, and report any issues or problems to the developers of the mod.
-
Lack of support and updates: Since this mod is not an official version of the game, it may not receive regular support or updates from the developers of the game. It may become outdated or obsolete over time, and you may miss out on some new content or improvements of the game.
-
-
How to download and install Bad 2 Bad: Apocalypse Premium Mod APK?
-
If you want to try Bad 2 Bad: Apocalypse Premium Mod APK, you will need to download and install it on your device. Here are the steps to do so:
-
The steps to download and install Bad 2 Bad: Apocalypse Premium Mod APK
-
-
Download the mod file: You can download the mod file from various websites that offer it, such as [APKPure], [APKHome], or [ModDroid]. Make sure you download the latest version of the mod that matches your device's specifications and requirements. If the site publishes a checksum for the file, verifying it before you install is a good habit; one way to do that is sketched after this list.
-
Enable unknown sources: Before you can install the mod file, you will need to allow installations from unknown sources on your device. This lets you install apps that do not come from the Google Play Store. On older versions of Android, go to Settings, then Security, then Unknown sources, and toggle it on; on newer versions, Android instead asks you to grant the "Install unknown apps" permission to the specific app (such as your browser or file manager) that opens the APK.
-
Install the mod file: After you have downloaded and enabled unknown sources, you can install the mod file. Locate the file in your device's storage, tap on it, and follow the instructions on the screen. Wait for the installation process to finish.
-
Launch the game: After you have installed the mod file, you can launch the game from your device's app drawer or home screen. Enjoy playing Bad 2 Bad: Apocalypse Premium Mod APK.
-
-
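One simple precaution you can take between downloading and installing is to verify the file's checksum. If the website you download from publishes a SHA-256 (or similar) hash for the file, comparing it against the file you actually received will catch corrupted or silently altered downloads. The short Python sketch below shows one way to do this on a PC; the file name and expected hash are placeholder values, not details from any real mod site.

```python
import hashlib
from pathlib import Path

# Placeholder values: point APK_PATH at the file you downloaded and paste in
# the checksum published alongside it (if the site provides one).
APK_PATH = Path("bad2bad-apocalypse-premium-mod.apk")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch - do not install this file.")
        print(f"expected: {EXPECTED_SHA256}")
        print(f"actual:   {actual}")
```

A matching checksum only tells you the file was not changed in transit; it says nothing about whether the file was safe in the first place, so the precautions below still apply.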
The precautions to take before downloading and installing Bad 2 Bad: Apocalypse Premium Mod APK
-
As mentioned earlier, there are some risks and drawbacks of using Bad 2 Bad: Apocalypse Premium Mod APK. Therefore, you should take some precautions before downloading and installing it on your device. Some of them are:
-
-
Check the source and reviews of the mod file: Before you download the mod file, you should check the source and reviews of it. Make sure you download it from a trusted and reliable website that has positive feedback from other users. Avoid downloading it from shady or suspicious websites that may contain malware or viruses.
-
Scan the mod file with an antivirus: Before you install the mod file, you should scan it with an antivirus app on your device. This will help you detect and remove any harmful or unwanted code in the mod file. You can use any antivirus app that you trust, such as [Avast], [Malwarebytes], or [Kaspersky]. A quick structural check of the file itself, sketched after this list, can also confirm that it is at least a genuine APK archive.
-
Backup your data and uninstall the original game: Before you install the mod file, you should backup your data and uninstall the original game from your device. This will help you avoid any conflicts or errors that may occur between the mod and the original game. You can backup your data using any cloud service or external storage that you prefer, such as [Google Drive], [Dropbox], or [USB]. You can uninstall the original game by going to your device's settings, then apps, then Bad 2 Bad: Apocalypse, and tapping on uninstall.
-
-
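An antivirus scan is the main safeguard, but you can also do a quick structural sanity check yourself: an APK is just a ZIP archive, and a normal app APK contains an AndroidManifest.xml entry and at least one classes.dex file. The sketch below (again using a placeholder file name) performs that check with Python's standard zipfile module; passing it does not mean the file is safe, only that it is not some other kind of file renamed to .apk.

```python
import zipfile
from pathlib import Path

# Placeholder file name; point this at the APK you actually downloaded.
APK_PATH = Path("bad2bad-apocalypse-premium-mod.apk")

def looks_like_apk(path: Path) -> bool:
    """Return True if the file is a ZIP archive containing the entries that
    every normal app APK has: a manifest and at least one Dalvik bytecode file."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
    has_manifest = "AndroidManifest.xml" in names
    has_dex = any(name.endswith(".dex") for name in names)
    return has_manifest and has_dex

if __name__ == "__main__":
    if looks_like_apk(APK_PATH):
        print("The file has the basic structure of an APK.")
    else:
        print("The file does not look like a valid APK - treat it with suspicion.")
```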
Conclusion
-
Bad 2 Bad: Apocalypse Premium Mod APK is a modified version of the original game that gives you some extra benefits and advantages, such as unlimited money and gems, all characters unlocked, and no ads. However, it also has some risks and drawbacks, such as potential malware and viruses, compatibility issues and bugs, and lack of support and updates. Therefore, you should take some precautions before downloading and installing it on your device, such as checking the source and reviews of the mod file, scanning it with an antivirus, and backing up your data and uninstalling the original game.
-
If you are looking for a fun and exciting shooting game with cute animal characters, you can try Bad 2 Bad: Apocalypse Premium Mod APK. However, you should also be aware of the possible consequences and problems that may arise from using it. We hope this article has helped you understand more about this modded version of the game and how to download and install it on your device.
-
FAQs
-
Here are some frequently asked questions about Bad 2 Bad: Apocalypse Premium Mod APK:
-
-
Is Bad 2 Bad: Apocalypse Premium Mod APK safe to use? It depends on where you download it from and how you install it. If you download it from a trusted and reliable website and scan it with an antivirus before installing it, it should be safe to use. However, if you download it from a shady or suspicious website and install it without checking it for malware or viruses, it may not be safe to use.
-
Is Bad 2 Bad: Apocalypse Premium Mod APK legal to use? It depends on your country's laws and regulations regarding modded apps and games. Some countries may allow the use of modded apps and games for personal and non-commercial purposes, while others may prohibit or restrict them. You should check your country's laws and regulations before using Bad 2 Bad: Apocalypse Premium Mod APK.
-
Can I play Bad 2 Bad: Apocalypse Premium Mod APK online with other players? No, you cannot play Bad 2 Bad: Apocalypse Premium Mod APK online with other players. This modded version of the game is not compatible with the online mode of the original game. You can only play it offline, either on your own or with friends on the same device.
-
Can I update Bad 2 Bad: Apocalypse Premium Mod APK? No, you cannot update Bad 2 Bad: Apocalypse Premium Mod APK. This modded version of the game is not supported or updated by the developers of the original game. You can only use the version that you downloaded and installed on your device.
-
Can I restore my data from the original game to Bad 2 Bad: Apocalypse Premium Mod APK? No, you cannot restore your data from the original game to Bad 2 Bad: Apocalypse Premium Mod APK. This modded version of the game is not compatible with the data of the original game. You will have to start from scratch when you play Bad 2 Bad: Apocalypse Premium Mod APK.
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Lecrae Rebel Full Album Zip 1.md b/spaces/tioseFevbu/cartoon-converter/scripts/Lecrae Rebel Full Album Zip 1.md
deleted file mode 100644
index 39cd1104e0aea3edfed50d0fc16cd2bd4dbfbad8..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Lecrae Rebel Full Album Zip 1.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
Lecrae's Rebel: A Christian Rap Classic
-
Lecrae is one of the most influential and successful Christian rap artists of all time. His third studio album, Rebel, released in 2008, is widely regarded as a masterpiece of the genre. Rebel showcases Lecrae's lyrical skills, theological depth, and musical diversity, as he tackles topics such as sin, grace, identity, and social justice.
Rebel consists of 15 tracks, each with a different theme and style. The album opens with an intro that sets the tone for the rest of the project: Lecrae declares that he is a rebel against the world's system and values, and that he lives for God's glory alone. The intro is followed by some of the most popular songs on the album, such as "Don't Waste Your Life", "Go Hard", and "Identity". These songs challenge listeners to live with purpose, passion, and faith in Christ.
-
The album also features some collaborations with other Christian rap artists, such as Dwayne Tryumf, Tedashii, Da' T.R.U.T.H., J.R., Cam, Trip Lee, Sho Baraka, and Jai. These artists add their own flavor and perspective to the album, creating a rich and diverse musical experience. Some of the highlights of these collaborations are "Desperate", "Fall Back", and "Beautiful Feet".
-
Rebel is not only a musical masterpiece, but also a theological one. Lecrae does not shy away from addressing some of the most difficult and controversial issues of his time, such as racism, abortion, poverty, and persecution. He does so with biblical wisdom, compassion, and courage, calling his listeners to repentance, faithfulness, and action. Some of the songs that deal with these issues are "Truth", "Change", and "The Bride".
-
Rebel is an album that has inspired and impacted millions of people around the world. It has been nominated for several awards, including a Dove Award for Best Rap/Hip-Hop Album in 2009. It has also been praised by critics and fans alike for its quality and relevance. Rebel is an album that every Christian rap fan should listen to and appreciate.
-
-
-
Rebel is not only a musical and theological masterpiece, but also a commercial success. The album debuted at No. 60 on the Billboard 200 albums chart, making it Lecrae's first album to enter that chart. It also reached No. 1 on the Top Gospel Albums chart, and stayed there for several weeks. Rebel sold over 15,000 copies in its first week, and has sold over 200,000 copies to date. Rebel has received positive reviews from critics and fans alike, who praised Lecrae's authenticity, creativity, and boldness.
-
Rebel is an album that deserves to be heard by anyone who loves rap music, regardless of their religious beliefs. Lecrae proves that Christian rap can be relevant, engaging, and powerful, without compromising the message or the quality. Rebel is an album that challenges listeners to think critically, live passionately, and rebel against the status quo. Rebel is an album that glorifies God and edifies His people.
-
-
Rebel is not only a product of Lecrae's musical and theological vision, but also a reflection of his personal journey and experiences. In several interviews, Lecrae has shared the inspiration and motivation behind some of the songs on the album. For example, he said that "Don't Waste Your Life" was inspired by John Piper's book of the same title, which challenged him to live for God's glory and not for worldly pleasures. He also said that "Go Hard" was a response to the criticism he faced from some Christians who accused him of being too worldly or compromising his faith by collaborating with secular artists. He said that he wanted to show that he was not ashamed of the gospel and that he was willing to go hard for Christ in any context.
-
Lecrae also revealed some of the stories and testimonies behind some of the songs on Rebel. He said that "Desperate" was based on a real conversation he had with a friend who was struggling with drug addiction and suicidal thoughts. He said that he wanted to share his friend's pain and hopelessness, but also point him to Christ as the only source of hope and healing. He also said that "The Bride" was inspired by his visit to China, where he witnessed the persecution and suffering of underground Christians. He said that he wanted to honor them and remind them that they were part of the bride of Christ, who would one day be united with Him.