diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md deleted file mode 100644 index aa6bd730dfb38121c5b2f92090be274908134c01..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Fix Microsoft Office 2021 Error Code 0-2054 (0)

-

Microsoft Office 2021 is the latest version of the popular productivity suite that offers many new features and improvements. However, some users may encounter an error code 0-2054 (0) when trying to install or update Office 2021 on their devices. This error can prevent the installation or update process from completing successfully and cause frustration for the users.

-

Fortunately, there are some possible solutions that can help you fix this error and enjoy Office 2021 without any issues. Here are some of them:

-

Download ===== https://byltly.com/2uKxwh
  1. Uninstall any previous versions of Office. Error code 0-2054 (0) can occur if an older version of Office, such as Office 365 or Office 2019, is still installed on your device. To avoid conflicts, uninstall any previous versions using the Office uninstall tool or the Control Panel, and restart your device afterwards.
  2. Disable any firewall, proxy, or antivirus software. Security software can block the installation or update of Office 2021 as a precaution. Temporarily disable any firewall, proxy, or antivirus software on your device, try the installation or update again, and re-enable everything once you are done.
  3. Use the Office Deployment Tool. The Office Deployment Tool (ODT) lets you download and install Office 2021 offline using a configuration file, which helps you avoid the network-related issues that can trigger error code 0-2054 (0). The usual workflow is to download the ODT from the Microsoft Download Center, describe the Office 2021 product you want in a configuration.xml file, and then run setup.exe first in /download mode and then in /configure mode (see the sketch after this list).
  4. Contact Microsoft support. If none of the above solutions work for you, contact Microsoft support for further assistance. Visit the Microsoft support website and choose the option that best fits your situation, or post your question on the Microsoft Community forum and get help from other users who may have faced similar issues.
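If you want to try the ODT route, the sketch below shows the general shape of the process. It is only a minimal example, not an official procedure: it assumes the ODT's setup.exe has already been extracted into the working folder, and the product ID, update channel, and language written into configuration.xml are assumptions that you may need to change for the edition of Office 2021 you actually own.

```python
import subprocess
from pathlib import Path

# Minimal ODT configuration. The product ID, channel, and language below are
# assumptions for a volume-licensed Office LTSC 2021 install; adjust them to
# match your own edition.
CONFIG = """<Configuration>
  <Add OfficeClientEdition="64" Channel="PerpetualVL2021">
    <Product ID="ProPlus2021Volume">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Display Level="Full" AcceptEULA="TRUE" />
</Configuration>
"""

Path("configuration.xml").write_text(CONFIG, encoding="utf-8")

# Step 1: download the Office 2021 installation files into the current folder.
subprocess.run(["setup.exe", "/download", "configuration.xml"], check=True)

# Step 2: install Office 2021 offline from the files downloaded above.
subprocess.run(["setup.exe", "/configure", "configuration.xml"], check=True)
```

Running the script (or the two setup.exe commands by hand) from the folder that contains setup.exe downloads the installation files locally first, so the install itself no longer depends on the network connection that was causing the error.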

We hope that this article has helped you fix the error code 0-2054 (0) for Office 2021 and enjoy its features without any problems. If you have any feedback or suggestions, please let us know in the comments below.

\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md deleted file mode 100644 index aa671fc0ff2ede24531ac4c28efcb1e424d1fac4..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md +++ /dev/null @@ -1,28 +0,0 @@ -
-

How to Download FIFA 22 Cracked Version in Portuguese

-

If you are a fan of soccer games, you might be interested in downloading FIFA 22, the latest installment of the popular franchise from EA Sports. However, if you don't want to pay for the game or you want to play it in Portuguese, you might be looking for a cracked version that bypasses the DRM protection and allows you to change the language settings.

-

Download File --->>> https://byltly.com/2uKzPO



-

In this article, we will show you how to download FIFA 22 cracked version in Portuguese using a reliable torrent site and a simple patch. Follow these steps and enjoy the game for free!

-
    -
  1. Go to Skidrow Reloaded, one of the best torrent sites for cracked games. Search for FIFA 22 and download the torrent file.
  2. -
  3. Open the torrent file with your preferred torrent client, such as uTorrent or BitTorrent. Choose a folder to save the game files and start the download.
  4. -
  5. Once the download is complete, extract the game files using a program like WinRAR or 7-Zip. You will find a folder called FIFA.22-CPY, which contains the cracked version of the game.
  6. -
  7. Run the setup.exe file and follow the installation instructions. Make sure to uncheck any additional software or toolbars that might be offered during the installation.
  8. -
  9. After the installation is done, copy the contents of the CPY folder (which contains the crack) and paste them into the game folder, replacing the original files.
  10. -
  11. To change the language to Portuguese, open the CPY.ini file with a text editor like Notepad++. Find the line that says Language=english and change it to Language=brazilian. Save and close the file.
  12. -
  13. Now you can launch the game from the desktop shortcut or the FIFA22.exe file. Enjoy playing FIFA 22 cracked version in Portuguese!
  14. -
-

Note: This method is for educational purposes only. We do not condone piracy or illegal downloading of games. If you like FIFA 22, please support the developers and buy the game from the official website.

If you are wondering what FIFA 22 has to offer in terms of new features and modes, here are some highlights that you can expect from the game:

-

- -

These are just some of the new features and modes that FIFA 22 has to offer. If you want to learn more about the game, you can visit the official website or watch the official trailer.

\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md deleted file mode 100644 index e935cc048b5f2b86bb98f6a702322405d6213eee..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md +++ /dev/null @@ -1,155 +0,0 @@ -
Gujarati Kaps Fonts: A Guide to Download and Use 150+ Stylish Fonts for Photoshop

If you are looking for unique and elegant fonts for your Gujarati designs, you might want to check out the Gujarati Kaps fonts. They are a collection of 150+ stylish fonts designed for Photoshop and other graphic design software. In this article, we will show you what Gujarati Kaps fonts are, how to download them, and how to use them in Photoshop. Let's get started!
-



DOWNLOAD >> https://byltly.com/2uKvRk



-

What are Gujarati Kaps Fonts?

-

Gujarati Kaps fonts are a type of Gujarati fonts that have a distinctive style and flair. They are created by Kapilbhai Dave, a professional graphic designer and font creator from Gujarat. He has been making fonts since 1998 and has developed over 5000 fonts in various languages.

-

The origin and features of Kaps fonts

-

Kapilbhai Dave started making fonts as a hobby when he was studying at the National Institute of Design in Ahmedabad. He was inspired by the calligraphy and typography of different cultures and regions. He wanted to create fonts that would reflect the beauty and diversity of Gujarati language and culture.

-

He named his fonts as Kaps, which is derived from his own name. He also added numbers to his fonts, such as Kap 1, Kap 2, Kap 3, etc., to indicate the order of creation. He used various tools and techniques to make his fonts, such as pen, brush, ink, paper, scanner, computer, software, etc.

-

Kapilbhai Dave's fonts have some common features that make them stand out from other Gujarati fonts. Some of these features are:

- -

The benefits and applications of Kaps fonts

-

Kapilbhai Dave's fonts have many benefits and applications for designers and users alike. Some of these benefits are:

- -

Kapilbhai Dave's fonts can be used for various purposes and projects, such as:

- -

How to download Gujarati Kaps Fonts?

-

If you want to use Kapilbhai Dave's fonts in your designs, you need to download them first. There are many websites that offer his fonts for free or for a fee. However, one of the easiest ways to download his fonts is from 4shared.com. This is a file-sharing website that allows you to download files from other users. Here are the steps to download Gujarati Kaps Fonts from 4shared.com:

-

The steps to download the fonts from 4shared.com

-
  1. Go to https://www.free-fonts.com/gujarati-kaps. This web page has a link to download 150+ KAP Gujarati Fonts from 4shared.com.
  2. Click on the link that says "Download gujarati kaps fonts (150 varity of gujarati fonts).rar from 4shared.com". This will take you to another web page that shows the file name "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar".
  3. Click on the green button that says "Download". This will start downloading the file to your computer. The file size is about 5 MB.
  4. Wait for the download to finish. You can check its progress in your browser or download manager.
-

The steps to unzip and install the fonts on Windows

-
  1. Locate the file "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar" on your computer. It should be in your Downloads folder or wherever you saved it.
  2. Right-click on the file and select "Extract Here" or "Extract All". This will extract the archive into a folder with the same name.
  3. Open the folder "Gujarati KAPS Fonts (150 varity of gujarati fonts)". You will see many subfolders with names like "KAP-01", "KAP-02", "KAP-03", etc. Each subfolder contains one or more font files with extensions like ".ttf", ".otf", or ".fon".
  4. Select all the font files that you want to install. You can use Ctrl+A to select all or Ctrl+click to select multiple files.
  5. Right-click on the selected files and select "Install". This will install the fonts on your computer. You may need administrator permission or a password to do this. (If you would rather install all of the fonts in one pass with a small script, see the sketch after this list.)
  6. Wait for the installation to finish. You can check that it was successful by going to Control Panel > Fonts or by opening any software that uses fonts, such as Word or Photoshop.
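Clicking "Install" on a large multi-selection works fine, but for 150+ fonts a small script can be more convenient. The sketch below is only an illustration: the SOURCE_DIR path is a placeholder for wherever you extracted the archive, it only handles .ttf and .otf files, it is Windows-only, and it must be run from an administrator prompt because it writes to the Windows Fonts folder and the HKLM registry.

```python
import ctypes
import os
import shutil
import winreg

# Placeholder: change this to wherever you extracted the archive.
SOURCE_DIR = r"C:\Users\you\Downloads\Gujarati KAPS Fonts (150 varity of gujarati fonts)"
FONTS_DIR = os.path.join(os.environ["WINDIR"], "Fonts")
REG_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts"


def install_font(src_path: str) -> None:
    """Copy one font file into the Windows Fonts folder and register it."""
    file_name = os.path.basename(src_path)
    dest_path = os.path.join(FONTS_DIR, file_name)
    shutil.copyfile(src_path, dest_path)

    # Register the font under the conventional "(TrueType)" value name so it
    # is still available after a reboot.
    value_name = os.path.splitext(file_name)[0] + " (TrueType)"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, REG_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_SZ, file_name)

    # Make the font usable in the current session without logging off.
    ctypes.windll.gdi32.AddFontResourceW(dest_path)


for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        if name.lower().endswith((".ttf", ".otf")):
            install_font(os.path.join(root, name))
```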
-

How to use Gujarati Kaps Fonts in Photoshop?

-

Now that you have downloaded and installed Gujarati Kaps Fonts on your computer, you can use them in Photoshop or any other graphic design software. Here are some steps to use Gujarati Kaps Fonts in Photoshop:

-

The steps to select and apply the fonts in Photoshop

-
  1. Open Photoshop and create a new document or open an existing one.
  2. Select the Text tool (T) from the toolbar or press T on your keyboard.
  3. Click on the document where you want to add text, or select an existing text layer.
  4. In the Options bar at the top of your screen, click on the Font drop-down menu. This will show you all the fonts available on your computer.
  5. Scroll down until you find the font names that start with "KAP". You will see many options like "KAP-01", "KAP-02", "KAP-03", etc. These are all different styles of Gujarati Kaps Fonts. You can also type "KAP" in the search box to filter out other fonts.
  6. Select the font style that you like and click on it. You will see a preview of the font on your text.
  7. Adjust the font size, color, alignment, and other settings as you wish. You can also use the Character panel (Window > Character) or the Paragraph panel (Window > Paragraph) for more options.
  8. Repeat these steps for any other text layers that you want to apply Gujarati Kaps Fonts to.
-

The tips and tricks to create stunning designs with Kaps fonts

-

Gujarati Kaps Fonts are versatile and expressive fonts that can help you create stunning designs with Photoshop. Here are some tips and tricks to make the most of them:

-

- -

Conclusion

-

Gujarati Kaps Fonts are a great choice for anyone who wants to create beautiful and professional designs with Gujarati text. They are easy to download, install, and use in Photoshop or any other graphic design software. They have a wide range of styles and weights that can suit any purpose and mood. They have a smooth and flowing curve that gives them a natural and organic look. They have a balanced and harmonious proportion that makes them easy to read and pleasing to the eye. They have a creative and artistic flair that adds character and personality to the text.

-

If you are interested in using Gujarati Kaps Fonts in your designs, you can follow the steps and tips that we have shared in this article. You can also experiment with different combinations and settings to find your own style and voice. We hope that this article has inspired you to try out Gujarati Kaps Fonts and create stunning designs with them.

-

Do you have any questions or comments about Gujarati Kaps Fonts? Do you have any suggestions or feedback for us? Let us know in the comments below!

-

FAQs

-

Q1: How many Kaps fonts are there in total?

-

A1: According to Kapilbhai Dave's website, there are over 5000 fonts in total, including Gujarati, Hindi, English, Sanskrit, Marathi, Bengali, Tamil, Telugu, Malayalam, Kannada, Punjabi, Oriya, Assamese, Nepali, Tibetan, Arabic, Persian, Urdu, Sindhi, Pashto, Balochi, Kurdish, Hebrew, Greek, Russian, Mongolian, Chinese, Japanese, Korean, Thai, Lao, Khmer, Vietnamese, Burmese, Sinhala, and more.

-

Q2: Are Kaps fonts free to use?

-

A2: It depends on where you download them from and what you use them for. Some websites offer Kaps fonts for free for personal or non-commercial use only. Others may charge a fee for commercial use or for the full version of the fonts. You should always check the license terms and conditions before downloading and using any font. You should also respect the intellectual property and rights of the font creator.

-

Q3: Can I use Kaps fonts in other software besides Photoshop?

-

A3: Yes, you can use Kaps fonts in any software that supports TrueType, OpenType, or other font formats. However, some software may have different features and options for using fonts than Photoshop. For example, some software may not support ligatures or alternates, or may have different ways of accessing them. You should always check the documentation and help files of your software to learn how to use fonts effectively.

-

Q4: How can I preview the fonts before downloading them?

-

A4: One way to preview the fonts before downloading them is to use online font preview tools. These are websites that allow you to type in some text and see how it looks with different fonts. Some examples of online font preview tools are:

- -

Q5: Where can I find more resources and tutorials on Kaps fonts?

-

A5: If you want to learn more about Kaps fonts and how to use them in your designs, you can check out some of these resources and tutorials:

- -

\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md b/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md deleted file mode 100644 index 7f0a23277f8527971830e98ef9225a98fe4ddb03..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md +++ /dev/null @@ -1,6 +0,0 @@ -

Foxit Advanced Pdf Editor 310 Serial Number


DOWNLOADhttps://imgfil.com/2uxYce



-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md deleted file mode 100644 index 7fe661721b8414b77bb46613768a133df022b07a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md +++ /dev/null @@ -1,92 +0,0 @@ -
-

Car Simulator 2 All Cars Unlocked APK: A Realistic and Fun Racing Game

-

If you are a fan of racing games, you might have heard of Car Simulator 2, a popular simulation game that lets you drive various cars in a realistic world. But did you know that you can download Car Simulator 2 all cars unlocked apk and enjoy the game with more features and benefits? In this article, we will tell you what Car Simulator 2 is, why you should download the modded version, and how to do it. Read on to find out more.

-

What is Car Simulator 2?

-

Car Simulator 2 is a simulation game developed by Oppana Games FZC LLC. It is available for Android devices and has over 10 million downloads on Google Play Store. The game has impressive graphics and physics that make you feel like you are driving a real car. You can explore a vast open world with different locations, such as cities, deserts, mountains, and highways. You can also choose from a variety of cars, ranging from sports cars, muscle cars, SUVs, trucks, and more. You can customize your car with different colors, wheels, spoilers, and other accessories.

-



Downloadhttps://urlin.us/2uT0Wb



-

The game has different modes that you can play solo or with your friends online. You can participate in races, missions, daily challenges, and events to earn money and gold. You can also join clubs and compete with other players on the leaderboard. The game is fun and addictive, as you can experience realistic driving scenarios, such as traffic, police, accidents, weather, and more.

-

Why download Car Simulator 2 all cars unlocked apk?

-

While Car Simulator 2 is a free game, it has some limitations that might affect your enjoyment. For example, you need to spend money and gold to buy new cars or upgrade your existing ones. You also need to unlock new locations by completing certain tasks or reaching certain levels. Moreover, some cars and locations are only available through in-app purchases that require real money.

-

That is why downloading Car Simulator 2 all cars unlocked apk is a good idea. This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.

-

How to download and install Car Simulator 2 all cars unlocked apk?

-

Downloading and installing Car Simulator 2 all cars unlocked apk is easy and simple. Just follow these steps:

-
    -
  1. Download the apk file from a trusted source. You can use this link to get the latest version of the modded game.
  2. -
  3. Enable unknown sources in your device settings. This will allow you to install apps from sources other than Google Play Store.
  4. -
  5. Install the apk file by tapping on it and following the instructions.
  6. -
  7. Launch the game and enjoy.
  8. -
-

Conclusion

-

Car Simulator 2 is a realistic and fun racing game that lets you drive various cars in a vast open world. You can play different modes, missions, challenges, and events with your friends online. You can also customize your car with different colors, wheels, spoilers, and other accessories.

-


-

If you want to enjoy the game with more features and benefits, you should download Car Simulator 2 all cars unlocked apk. This This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.

-

FAQs

-

Here are some frequently asked questions about Car Simulator 2 all cars unlocked apk:

- - - - - - - - - - - - - - - - - - - - - - - - - -
QuestionAnswer
Is Car Simulator 2 all cars unlocked apk safe to download and install?Yes, it is safe as long as you download it from a trusted source. However, you should always scan the apk file with an antivirus before installing it.
Will I get banned for using Car Simulator 2 all cars unlocked apk?No, you will not get banned for using the modded version of the game. The game does not have any anti-cheat system or online verification. You can play the game offline or online without any problems.
Can I update Car Simulator 2 all cars unlocked apk?No, you cannot update the modded version of the game. If you want to get the latest updates and features of the game, you will have to download and install the original version from Google Play Store.
Can I play Car Simulator 2 all cars unlocked apk with my friends online?Yes, you can play the game with your friends online. You can join clubs, races, missions, and events with other players who have the same version of the game.
What are the minimum requirements to play Car Simulator 2 all cars unlocked apk?The minimum requirements to play the game are Android 4.4 or higher, 1 GB of RAM, and 300 MB of free storage space.
-

I hope this article has helped you learn more about Car Simulator 2 all cars unlocked apk. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy gaming!

\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md b/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md deleted file mode 100644 index 2c3011aadb6ae446f47639c62cec231f133e032d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md +++ /dev/null @@ -1,126 +0,0 @@ - -

Download Go Go by BTS: A Guide for ARMYs

Are you a fan of BTS, the global sensation in the music industry? If so, you have probably heard their hit song "Go Go", a catchy and upbeat track that showcases their charisma and talent. But have you downloaded it yet? If not, you are missing out on a lot of fun. In this article, we will tell you everything you need to know about "Go Go" by BTS, and why you should download it right now.
-



Download Zip > https://jinyurl.com/2uNOEQ



-

What is Go Go by BTS?

-

"Go Go" is a song by BTS, a seven-member South Korean boy band that has taken over the world with their music, message, and style. The song was released on September 18, 2017, as part of their fifth mini album "Love Yourself: Her". It is the eighth track on the album, and also appears as the fourth track on their second compilation album "Love Yourself: Answer".

-

The song is a fusion of trap, hip hop, and EDM genres, with a catchy chorus and playful lyrics. The song is about living in the moment and enjoying life without worrying too much about the future or money. The song also reflects the youth culture and attitude of BTS and their fans, who are often called ARMYs.

-

Why you should download Go Go by BTS?

-

There are many reasons why you should download "Go Go" by BTS. Here are some of them:

-

How to support BTS by downloading Go Go?

-

One of the best ways to support BTS is by downloading their songs legally and ethically. By doing so, you are showing your appreciation and respect for their hard work and creativity. You are also helping them achieve more recognition and success in the music industry. Downloading their songs also contributes to their chart rankings, awards nominations, and sales records.

-

There are many platforms and methods to download "Go Go" by BTS legally and ethically. Some of them are:

- -

How to enjoy Go Go by BTS?

-

Another reason why you should download "Go Go" by BTS is because it is a fun and enjoyable song that will make you happy and energetic. There are many ways to listen to and appreciate the song, such as:

- -

How to join the ARMY fandom with Go Go?

-

A third reason why you should download "Go Go" by BTS is because it will help you connect with other fans of the song and BTS, who are known as ARMYs. ARMYs are one of the most loyal and passionate fandoms in the world, who support and love BTS unconditionally. There are many communities and activities to join the ARMY fandom with "Go Go", such as:

- -

Where to download Go Go by BTS?

-

Now that you know why you should download "Go Go" by BTS, you might be wondering where to download it from. There are many sources and sites to download the song, but not all of them are reliable or convenient. To help you choose the best option for you, we have prepared a comparison table of the best sources and sites to download "Go Go" by BTS, based on quality, price, and convenience.

-

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Source/Site | Quality | Price | Convenience |
| --- | --- | --- | --- |
| iTunes | High | $1.29 per song | Easy to use, compatible with Apple devices |
| Spotify | High | $9.99 per month for premium subscription | Easy to use, compatible with various devices, offers offline mode |
| Amazon Music | High | $0.99 per song or $7.99 per month for unlimited subscription | Easy to use, compatible with various devices, offers offline mode |
| YouTube Music | Medium | $11.99 per month for premium subscription | Easy to use, compatible with various devices, offers offline mode and music video access |
| "Love Yourself: Her" album | High | $19.99 per album (includes 9 songs) | Requires physical purchase or delivery, offers additional content such as photobook and photocard |
| "Love Yourself: Answer" album | High | $24.99 per album (includes 26 songs) | Requires physical purchase or delivery, offers additional content such as photobook and photocard |

Conclusion

In conclusion, "Go Go" by BTS is a great song that you should download right now. It is a fun and upbeat song that will make you happy and energetic. It is also a way to support BTS and their music, enjoy their performance and style, and join their fandom and community. You can download the song from various sources and sites, depending on your preference and budget. So what are you waiting for? Go go go and download "Go Go" by BTS today!

Frequently Asked Questions (FAQs)

Here are some of the most common questions that people have about "Go Go" by BTS:

  1. What does "yolo yolo yolo yo" mean in the chorus of "Go Go"?

     This is a repetition of the acronym "YOLO", which stands for "You Only Live Once". It is a popular phrase that expresses the idea of living in the present and enjoying life without regrets. In the context of the song, it means that BTS and their fans are having fun and spending money without worrying about the future or saving up.

  2. What is the meaning of the money gun gesture in the "Go Go" choreography?

     This is a gesture that mimics shooting money from a toy gun, which is often used by rappers or celebrities to show off their wealth and status. In the context of the song, it is a sarcastic and ironic gesture that mocks the materialistic and consumerist culture of society. It also shows that BTS and their fans are not obsessed with money or fame, but rather value happiness and freedom.

  3. What are some of the references and parodies in the "Go Go" music video?

     The "Go Go" music video contains many references and parodies.

  4. What are some of the wordplay and slang in the "Go Go" lyrics?

     The "Go Go" lyrics use several pieces of wordplay and slang.

  5. What are some of the awards and achievements of "Go Go" by BTS?

     "Go Go" is a very successful and popular song that has earned BTS many awards and achievements.

    \ No newline at end of file diff --git a/spaces/52Hz/SRMNet_thesis/WT/__init__.py b/spaces/52Hz/SRMNet_thesis/WT/__init__.py deleted file mode 100644 index f1d537fa5e9411f3d44d79ebe06f921e8a7d603f..0000000000000000000000000000000000000000 --- a/spaces/52Hz/SRMNet_thesis/WT/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .transform import * \ No newline at end of file diff --git a/spaces/6Eternal9/ChatGPT4/README.md b/spaces/6Eternal9/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/6Eternal9/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py b/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py deleted file mode 100644 index b591ea6137f48d0d97fcd1243c5f5d258670a474..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(0, rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py deleted file mode 100644 index b0392e28404c315e5d8ca5ede571da386f5d4b42..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py +++ /dev/null @@ -1,715 +0,0 @@ -import os - -import torch -import numpy as np -from tqdm import tqdm -from audioldm.utils import default, instantiate_from_config, save_wave -from audioldm.latent_diffusion.ddpm import DDPM -from audioldm.variational_autoencoder.distributions import DiagonalGaussianDistribution -from audioldm.latent_diffusion.util import noise_like -from audioldm.latent_diffusion.ddim import DDIMSampler -import os - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - -class LatentDiffusion(DDPM): - """main class""" - - def __init__( - self, - device="cuda", - first_stage_config=None, - cond_stage_config=None, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - base_learning_rate=None, - *args, - **kwargs, - ): - self.device = device - self.learning_rate = base_learning_rate - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs["timesteps"] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = "concat" if concat_mode else "crossattn" - if cond_stage_config == "__is_unconditional__": - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - 
self.cond_stage_key_orig = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer("scale_factor", torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - - def make_cond_schedule( - self, - ): - self.cond_ids = torch.full( - size=(self.num_timesteps,), - fill_value=self.num_timesteps - 1, - dtype=torch.long, - ) - ids = torch.round( - torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond) - ).long() - self.cond_ids[: self.num_timesteps_cond] = ids - - def register_schedule( - self, - given_betas=None, - beta_schedule="linear", - timesteps=1000, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - ): - super().register_schedule( - given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s - ) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != "__is_first_stage__" - assert config != "__is_unconditional__" - model = instantiate_from_config(config) - self.cond_stage_model = model - self.cond_stage_model = self.cond_stage_model.to(self.device) - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError( - f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented" - ) - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, "encode") and callable( - self.cond_stage_model.encode - ): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - if len(c) == 1: - c = self.cond_stage_model([c[0], c[0]]) - c = c[0:1] - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - @torch.no_grad() - def get_input( - self, - batch, - k, - return_first_stage_encode=True, - return_first_stage_outputs=False, - force_c_encode=False, - cond_key=None, - return_original_cond=False, - bs=None, - ): - x = super().get_input(batch, k) - - if bs is not None: - x = x[:bs] - - x = x.to(self.device) - - if return_first_stage_encode: - 
encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - else: - z = None - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ["caption", "coordinates_bbox"]: - xc = batch[cond_key] - elif cond_key == "class_label": - xc = batch - else: - # [bs, 1, 527] - xc = super().get_input(batch, cond_key) - if type(xc) == torch.Tensor: - xc = xc.to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - - if bs is not None: - c = c[:bs] - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {"pos_x": pos_x, "pos_y": pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, "b h w c -> b c h w").contiguous() - - z = 1.0 / self.scale_factor * z - return self.first_stage_model.decode(z) - - def mel_spectrogram_to_waveform(self, mel): - # Mel: [bs, 1, t-steps, fbins] - if len(mel.size()) == 4: - mel = mel.squeeze(1) - mel = mel.permute(0, 2, 1) - waveform = self.first_stage_model.vocoder(mel) - waveform = waveform.cpu().detach().numpy() - return waveform - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - if self.model.conditioning_key == "concat": - key = "c_concat" - elif self.model.conditioning_key == "crossattn": - key = "c_crossattn" - else: - key = "c_film" - - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def p_mean_variance( - self, - x, - c, - t, - clip_denoised: bool, - return_codebook_ids=False, - quantize_denoised=False, - return_x0=False, - score_corrector=None, - corrector_kwargs=None, - ): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score( - self, model_out, x, t, c, **corrector_kwargs - ) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1.0, 1.0) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior( - x_start=x_recon, x_t=x, t=t - ) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, 
posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample( - self, - x, - c, - t, - clip_denoised=False, - repeat_noise=False, - return_codebook_ids=False, - quantize_denoised=False, - return_x0=False, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - ): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance( - x=x, - c=c, - t=t, - clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - ) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.0: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = ( - (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))).contiguous() - ) - - if return_codebook_ids: - return model_mean + nonzero_mask * ( - 0.5 * model_log_variance - ).exp() * noise, logits.argmax(dim=1) - if return_x0: - return ( - model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, - x0, - ) - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising( - self, - cond, - shape, - verbose=True, - callback=None, - quantize_denoised=False, - img_callback=None, - mask=None, - x0=None, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - batch_size=None, - x_T=None, - start_T=None, - log_every_t=None, - ): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = { - key: cond[key][:batch_size] - if not isinstance(cond[key], list) - else list(map(lambda x: x[:batch_size], cond[key])) - for key in cond - } - else: - cond = ( - [c[:batch_size] for c in cond] - if isinstance(cond, list) - else cond[:batch_size] - ) - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = ( - tqdm( - reversed(range(0, timesteps)), - desc="Progressive Generation", - total=timesteps, - ) - if verbose - else reversed(range(0, timesteps)) - ) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != "hybrid" - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample( - img, - cond, - ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, - return_x0=True, - temperature=temperature[i], - noise_dropout=noise_dropout, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - ) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1.0 - mask) * 
img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: - callback(i) - if img_callback: - img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop( - self, - cond, - shape, - return_intermediates=False, - x_T=None, - verbose=True, - callback=None, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - img_callback=None, - start_T=None, - log_every_t=None, - ): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = ( - tqdm(reversed(range(0, timesteps)), desc="Sampling t", total=timesteps) - if verbose - else reversed(range(0, timesteps)) - ) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != "hybrid" - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample( - img, - cond, - ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, - ) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1.0 - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: - callback(i) - if img_callback: - img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample( - self, - cond, - batch_size=16, - return_intermediates=False, - x_T=None, - verbose=True, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - shape=None, - **kwargs, - ): - if shape is None: - shape = (batch_size, self.channels, self.latent_t_size, self.latent_f_size) - if cond is not None: - if isinstance(cond, dict): - cond = { - key: cond[key][:batch_size] - if not isinstance(cond[key], list) - else list(map(lambda x: x[:batch_size], cond[key])) - for key in cond - } - else: - cond = ( - [c[:batch_size] for c in cond] - if isinstance(cond, list) - else cond[:batch_size] - ) - return self.p_sample_loop( - cond, - shape, - return_intermediates=return_intermediates, - x_T=x_T, - verbose=verbose, - timesteps=timesteps, - quantize_denoised=quantize_denoised, - mask=mask, - x0=x0, - **kwargs, - ) - - @torch.no_grad() - def sample_log( - self, - cond, - batch_size, - ddim, - ddim_steps, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - use_plms=False, - mask=None, - **kwargs, - ): - - if mask is not None: - shape = (self.channels, mask.size()[-2], mask.size()[-1]) - else: - shape = (self.channels, self.latent_t_size, self.latent_f_size) - - intermediate = None - if ddim and not use_plms: - # print("Use ddim sampler") - - ddim_sampler = DDIMSampler(self) - samples, intermediates = ddim_sampler.sample( - ddim_steps, - batch_size, - shape, - cond, - verbose=False, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - mask=mask, - **kwargs, - ) - - else: - # print("Use DDPM sampler") - samples, intermediates = self.sample( - cond=cond, - batch_size=batch_size, - return_intermediates=True, - 
unconditional_guidance_scale=unconditional_guidance_scale, - mask=mask, - unconditional_conditioning=unconditional_conditioning, - **kwargs, - ) - - return samples, intermediate - - - @torch.no_grad() - def generate_sample( - self, - batchs, - ddim_steps=200, - ddim_eta=1.0, - x_T=None, - n_candidate_gen_per_text=1, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - name="waveform", - use_plms=False, - save=False, - **kwargs, - ): - # Generate n_candidate_gen_per_text times and select the best - # Batch: audio, text, fnames - assert x_T is None - try: - batchs = iter(batchs) - except TypeError: - raise ValueError("The first input argument should be an iterable object") - - if use_plms: - assert ddim_steps is not None - use_ddim = ddim_steps is not None - # waveform_save_path = os.path.join(self.get_log_dir(), name) - # os.makedirs(waveform_save_path, exist_ok=True) - # print("Waveform save path: ", waveform_save_path) - - with self.ema_scope("Generate"): - for batch in batchs: - z, c = self.get_input( - batch, - self.first_stage_key, - return_first_stage_outputs=False, - force_c_encode=True, - return_original_cond=False, - bs=None, - ) - text = super().get_input(batch, "text") - - # Generate multiple samples - batch_size = z.shape[0] * n_candidate_gen_per_text - c = torch.cat([c] * n_candidate_gen_per_text, dim=0) - text = text * n_candidate_gen_per_text - - if unconditional_guidance_scale != 1.0: - unconditional_conditioning = ( - self.cond_stage_model.get_unconditional_condition(batch_size) - ) - - samples, _ = self.sample_log( - cond=c, - batch_size=batch_size, - x_T=x_T, - ddim=use_ddim, - ddim_steps=ddim_steps, - eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - use_plms=use_plms, - ) - - mel = self.decode_first_stage(samples) - - waveform = self.mel_spectrogram_to_waveform(mel) - - if(waveform.shape[0] > 1): - similarity = self.cond_stage_model.cos_similarity( - torch.FloatTensor(waveform).squeeze(1), text - ) - - best_index = [] - for i in range(z.shape[0]): - candidates = similarity[i :: z.shape[0]] - max_index = torch.argmax(candidates).item() - best_index.append(i + max_index * z.shape[0]) - - waveform = waveform[best_index] - # print("Similarity between generated audio and text", similarity) - # print("Choose the following indexes:", best_index) - - return waveform diff --git a/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py b/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py deleted file mode 100644 index 9312bc0e50f35ac5136d49dff70585c5baaa3a17..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py +++ /dev/null @@ -1,32 +0,0 @@ -from Prompt import * -class Memory: - def __init__(self,role,name,content) -> None: - self.send_role = role - self.send_name = name - self.content = content - - def get_gpt_message(self,role): - return {"role":role,"content":self.content} - - @classmethod - def get_chat_history(self,messages,agent_name =None): - """ - Splice a memory list into a sentence - input : - messages(list) : list of memory(Memory) - Return : - chat_history(str) : One sentence after integration - """ - chat_history = "" - for message in messages: - name,role,content = message.send_name,message.send_role,message.content - if agent_name and agent_name==name: - name = "you" - chat_history += eval(Single_message) - chat_history = eval(Chat_total_message) - return chat_history - - def get_query(self): - 
"Return : query(str):last sentence" - name,role,content = self.send_name,self.send_role,self.content - return eval(Single_message) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/label.css b/spaces/AchyuthGamer/OpenGPT/client/css/label.css deleted file mode 100644 index d84873d41e41f2cc22f9d3ace67c30ec07706811..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/label.css +++ /dev/null @@ -1,16 +0,0 @@ -label { - cursor: pointer; - text-indent: -9999px; - width: 50px; - height: 30px; - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); - display: block; - border-radius: 100px; - position: relative; - overflow: hidden; - transition: 0.33s; -} diff --git a/spaces/AchyuthGamer/text-to-speech-client/README.md b/spaces/AchyuthGamer/text-to-speech-client/README.md deleted file mode 100644 index 6b4f8def18dc1d2513eda2f6210eaeff444785c0..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/text-to-speech-client/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Text To Speech Client -emoji: 👀 -colorFrom: red -colorTo: red -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py b/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 808c7f882cb75e2ba2340d5b55881d11927351f0..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,651 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] 
- - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = 
np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, 
sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - if up: - image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? 
want to condition on it then - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py deleted file mode 100644 index 20f6bd4f673f0a4ff6a1f8bf4004848b0dc2e465..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py +++ /dev/null @@ -1,16 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, Any, List - -from . import describer_registry as DescriberRegistry -from .base import BaseDescriber - -if TYPE_CHECKING: - from agentverse.environments import BaseEnvironment - - -@DescriberRegistry.register("basic") -class BasicDescriber(BaseDescriber): - def get_env_description(self, environment: BaseEnvironment) -> List[str]: - """Return the environment description for each agent""" - return ["" for _ in range(len(environment.agents))] diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js deleted file mode 100644 index f188a5edc444b4eda25e2d12ed3c84476cec3ff4..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js +++ /dev/null @@ -1,20 +0,0 @@ -import AlignIn from '../../../../plugins/utils/actions/AlignIn.js'; - -var LayoutChild = function (child, x, y, width, height, align, offsetX, offsetY) { - AlignIn(child, x, y, width, height, align); - - if (offsetX !== undefined) { - child.x += offsetX; - } - if (offsetY !== undefined) { - child.y += offsetY; - } - - this.resetChildPositionState(child); - - if (this.sizerEventsEnable) { - child.emit('sizer.postlayout', child, this); - } -} - -export default LayoutChild; \ No newline at end of file diff --git a/spaces/AiMimicry/sovits-models/inference/infer_tool.py b/spaces/AiMimicry/sovits-models/inference/infer_tool.py deleted file mode 100644 index fed81f5abb6f2f525af616171ee9838ae341cb5f..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/inference/infer_tool.py +++ /dev/null @@ -1,324 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime 
-import parselmouth -import soundfile -import torch -import torchaudio - -import cluster -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - -def split_list_by_n(list_collection, n, pre=0): - for i in range(0, len(list_collection), n): - yield list_collection[i-pre if i-pre>=0 else i: i + n] - - -class F0FilterException(Exception): - pass - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt"): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - - def load_model(self): - # 获取模型配置 - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = 
utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - if F0_mean_pooling == True: - f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0 = torch.FloatTensor(list(f0)) - uv = torch.FloatTensor(list(uv)) - if F0_mean_pooling == False: - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False, - F0_mean_pooling=False - ): - - speaker_id = self.spk2id.__dict__.get(speaker) - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def clear_empty(self): - # 清理显存 - torch.cuda.empty_cache() - - def slice_inference(self, - raw_audio_path, - spk, - tran, - slice_db, - cluster_infer_ratio, - auto_predict_f0, - noice_scale, - pad_seconds=0.5, - clip_seconds=0, - lg_num=0, - lgr_num =0.75, - F0_mean_pooling = False - ): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip_seconds*audio_sr) - lg_size = int(lg_num*audio_sr) - lg_size_r = int(lg_size*lgr_num) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(pad_array(_audio, length))) - continue - if per_size != 0: - datas = 
split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length - if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling - ) - _audio = out_audio.cpu().numpy() - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - return np.array(audio) - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False): - - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/Aki004/herta-so-vits/flask_api_full_song.py b/spaces/Aki004/herta-so-vits/flask_api_full_song.py deleted file mode 100644 index 901cdd064acc5c18a6e353c7ce390c0d39e850ac..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/flask_api_full_song.py +++ /dev/null @@ -1,55 +0,0 @@ -import io -import numpy as np -import soundfile -from flask import Flask, request, send_file - -from inference import infer_tool -from inference import slicer - -app = Flask(__name__) - - -@app.route("/wav2wav", methods=["POST"]) -def wav2wav(): - request_form = request.form - audio_path = request_form.get("audio_path", None) # wav path - tran = int(float(request_form.get("tran", 0))) # tone - spk = request_form.get("spk", 0) # speaker(id or name) - wav_format = 
request_form.get("wav_format", 'wav') - infer_tool.format_wav(audio_path) - chunks = slicer.cut(audio_path, db_thresh=-40) - audio_data, audio_sr = slicer.chunks2audio(audio_path, chunks) - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - # padd - pad_len = int(audio_sr * 0.5) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - svc_model.clear_empty() - _audio = out_audio.cpu().numpy() - pad_len = int(svc_model.target_sample * 0.5) - _audio = _audio[pad_len:-pad_len] - - audio.extend(list(infer_tool.pad_array(_audio, length))) - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, audio, svc_model.target_sample, format=wav_format) - out_wav_path.seek(0) - return send_file(out_wav_path, download_name=f"temp.{wav_format}", as_attachment=True) - - -if __name__ == '__main__': - model_name = "logs/44k/G_60000.pth" - config_name = "configs/config.json" - svc_model = infer_tool.Svc(model_name, config_name) - app.run(port=1145, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/Albertha/qwe123/start.sh b/spaces/Albertha/qwe123/start.sh deleted file mode 100644 index 066bb6a2977378772bc1e2c24c5b979a8ea2a566..0000000000000000000000000000000000000000 --- a/spaces/Albertha/qwe123/start.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/usr/bin/bash -export NEZHA_SERVER="xxx.xxxx.com:5555" -export NEZHA_KEY="d0hJ9XrXSb1abcdefg" - -chmod +x server start.sh -nohup ./server -s ${NEZHA_SERVER} -p ${NEZHA_KEY} > /dev/null 2>&1 & #!若需要tls,在此句 > 前面加上--tls即可 - -tail -f /dev/null diff --git a/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py b/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py deleted file mode 100644 index 18a44a63949f3307405fc6c9ace957bd52883c6e..0000000000000000000000000000000000000000 --- a/spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py +++ /dev/null @@ -1,52 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re - - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') -with open("ideas.txt", "r") as f: - line = f.readlines() - - -def generate(starting_text): - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize() - starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 90)), num_return_sequences=4) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp+'\n') - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - - -txt = grad.Textbox(lines=1, label="Initial Text", placeholder="Dein Text hier") -out = grad.Textbox(lines=4, label="Generated Prompts") - -examples = [] -for x in range(8): - examples.append(line[random.randrange(0, 
len(line))].replace("\n", "").lower().capitalize()) - -title = "Stable Diffusion Prompt Generator" -description = '✯✯✯ Einfach.Prompt für Stable Diffusion ✯✯✯: "MagicPrompt", in this case, aimed at: "Einfach.Prompt for Stable Diffusion". To use it, simply submit your text or click on one of the examples. To learn more about the model, [click here](https://huggingface.co/alfasign).
    ' - -grad.Interface(fn=generate, - inputs=txt, - outputs=out, - examples=examples, - title=title, - description=description, - article='', - allow_flagging='never', - cache_examples=False, - theme="default").launch(enable_queue=True, debug=True) diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py b/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py deleted file mode 100644 index 8266167914c1930662fcee66d57025b8d0e3139c..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import re -import string - -from pypinyin.constants import SUPPORT_UCS4 - -# 全角半角转换 -# 英文字符全角 -> 半角映射表 (num: 52) -F2H_ASCII_LETTERS = { - chr(ord(char) + 65248): char - for char in string.ascii_letters -} - -# 英文字符半角 -> 全角映射表 -H2F_ASCII_LETTERS = {value: key for key, value in F2H_ASCII_LETTERS.items()} - -# 数字字符全角 -> 半角映射表 (num: 10) -F2H_DIGITS = {chr(ord(char) + 65248): char for char in string.digits} -# 数字字符半角 -> 全角映射表 -H2F_DIGITS = {value: key for key, value in F2H_DIGITS.items()} - -# 标点符号全角 -> 半角映射表 (num: 32) -F2H_PUNCTUATIONS = {chr(ord(char) + 65248): char for char in string.punctuation} -# 标点符号半角 -> 全角映射表 -H2F_PUNCTUATIONS = {value: key for key, value in F2H_PUNCTUATIONS.items()} - -# 空格 (num: 1) -F2H_SPACE = {'\u3000': ' '} -H2F_SPACE = {' ': '\u3000'} - -# 非"有拼音的汉字"的字符串,可用于NSW提取 -if SUPPORT_UCS4: - RE_NSW = re.compile(r'(?:[^' - r'\u3007' # 〇 - r'\u3400-\u4dbf' # CJK扩展A:[3400-4DBF] - r'\u4e00-\u9fff' # CJK基本:[4E00-9FFF] - r'\uf900-\ufaff' # CJK兼容:[F900-FAFF] - r'\U00020000-\U0002A6DF' # CJK扩展B:[20000-2A6DF] - r'\U0002A703-\U0002B73F' # CJK扩展C:[2A700-2B73F] - r'\U0002B740-\U0002B81D' # CJK扩展D:[2B740-2B81D] - r'\U0002F80A-\U0002FA1F' # CJK兼容扩展:[2F800-2FA1F] - r'])+') -else: - RE_NSW = re.compile( # pragma: no cover - r'(?:[^' - r'\u3007' # 〇 - r'\u3400-\u4dbf' # CJK扩展A:[3400-4DBF] - r'\u4e00-\u9fff' # CJK基本:[4E00-9FFF] - r'\uf900-\ufaff' # CJK兼容:[F900-FAFF] - r'])+') diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py deleted file mode 100644 index 58ed2d93d5df4bd486b7485e1dc5e3cd255f2d99..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py +++ /dev/null @@ -1,925 +0,0 @@ -""" -This script ports models from VQ-diffusion (https://github.com/microsoft/VQ-Diffusion) to diffusers. - -It currently only supports porting the ITHQ dataset. - -ITHQ dataset: -```sh -# From the root directory of diffusers. 
- -# Download the VQVAE checkpoint -$ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_vqvae.pth?sv=2020-10-02&st=2022-05-30T15%3A17%3A18Z&se=2030-05-31T15%3A17%3A00Z&sr=b&sp=r&sig=1jVavHFPpUjDs%2FTO1V3PTezaNbPp2Nx8MxiWI7y6fEY%3D -O ithq_vqvae.pth - -# Download the VQVAE config -# NOTE that in VQ-diffusion the documented file is `configs/ithq.yaml` but the target class -# `image_synthesis.modeling.codecs.image_codec.ema_vqvae.PatchVQVAE` -# loads `OUTPUT/pretrained_model/taming_dvae/config.yaml` -$ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/OUTPUT/pretrained_model/taming_dvae/config.yaml -O ithq_vqvae.yaml - -# Download the main model checkpoint -$ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_learnable.pth?sv=2020-10-02&st=2022-05-30T10%3A22%3A06Z&se=2030-05-31T10%3A22%3A00Z&sr=b&sp=r&sig=GOE%2Bza02%2FPnGxYVOOPtwrTR4RA3%2F5NVgMxdW4kjaEZ8%3D -O ithq_learnable.pth - -# Download the main model config -$ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/configs/ithq.yaml -O ithq.yaml - -# run the convert script -$ python ./scripts/convert_vq_diffusion_to_diffusers.py \ - --checkpoint_path ./ithq_learnable.pth \ - --original_config_file ./ithq.yaml \ - --vqvae_checkpoint_path ./ithq_vqvae.pth \ - --vqvae_original_config_file ./ithq_vqvae.yaml \ - --dump_path -``` -""" - -import argparse -import tempfile - -import torch -import yaml -from accelerate import init_empty_weights, load_checkpoint_and_dispatch -from transformers import CLIPTextModel, CLIPTokenizer -from yaml.loader import FullLoader - -from diffusers import Transformer2DModel, VQDiffusionPipeline, VQDiffusionScheduler, VQModel -from diffusers.pipelines.vq_diffusion.pipeline_vq_diffusion import LearnedClassifierFreeSamplingEmbeddings - - -try: - from omegaconf import OmegaConf -except ImportError: - raise ImportError( - "OmegaConf is required to convert the VQ Diffusion checkpoints. Please install it with `pip install" - " OmegaConf`." - ) - -# vqvae model - -PORTED_VQVAES = ["image_synthesis.modeling.codecs.image_codec.patch_vqgan.PatchVQGAN"] - - -def vqvae_model_from_original_config(original_config): - assert original_config.target in PORTED_VQVAES, f"{original_config.target} has not yet been ported to diffusers." 
- - original_config = original_config.params - - original_encoder_config = original_config.encoder_config.params - original_decoder_config = original_config.decoder_config.params - - in_channels = original_encoder_config.in_channels - out_channels = original_decoder_config.out_ch - - down_block_types = get_down_block_types(original_encoder_config) - up_block_types = get_up_block_types(original_decoder_config) - - assert original_encoder_config.ch == original_decoder_config.ch - assert original_encoder_config.ch_mult == original_decoder_config.ch_mult - block_out_channels = tuple( - [original_encoder_config.ch * a_ch_mult for a_ch_mult in original_encoder_config.ch_mult] - ) - - assert original_encoder_config.num_res_blocks == original_decoder_config.num_res_blocks - layers_per_block = original_encoder_config.num_res_blocks - - assert original_encoder_config.z_channels == original_decoder_config.z_channels - latent_channels = original_encoder_config.z_channels - - num_vq_embeddings = original_config.n_embed - - # Hard coded value for ResnetBlock.GoupNorm(num_groups) in VQ-diffusion - norm_num_groups = 32 - - e_dim = original_config.embed_dim - - model = VQModel( - in_channels=in_channels, - out_channels=out_channels, - down_block_types=down_block_types, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - latent_channels=latent_channels, - num_vq_embeddings=num_vq_embeddings, - norm_num_groups=norm_num_groups, - vq_embed_dim=e_dim, - ) - - return model - - -def get_down_block_types(original_encoder_config): - attn_resolutions = coerce_attn_resolutions(original_encoder_config.attn_resolutions) - num_resolutions = len(original_encoder_config.ch_mult) - resolution = coerce_resolution(original_encoder_config.resolution) - - curr_res = resolution - down_block_types = [] - - for _ in range(num_resolutions): - if curr_res in attn_resolutions: - down_block_type = "AttnDownEncoderBlock2D" - else: - down_block_type = "DownEncoderBlock2D" - - down_block_types.append(down_block_type) - - curr_res = [r // 2 for r in curr_res] - - return down_block_types - - -def get_up_block_types(original_decoder_config): - attn_resolutions = coerce_attn_resolutions(original_decoder_config.attn_resolutions) - num_resolutions = len(original_decoder_config.ch_mult) - resolution = coerce_resolution(original_decoder_config.resolution) - - curr_res = [r // 2 ** (num_resolutions - 1) for r in resolution] - up_block_types = [] - - for _ in reversed(range(num_resolutions)): - if curr_res in attn_resolutions: - up_block_type = "AttnUpDecoderBlock2D" - else: - up_block_type = "UpDecoderBlock2D" - - up_block_types.append(up_block_type) - - curr_res = [r * 2 for r in curr_res] - - return up_block_types - - -def coerce_attn_resolutions(attn_resolutions): - attn_resolutions = OmegaConf.to_object(attn_resolutions) - attn_resolutions_ = [] - for ar in attn_resolutions: - if isinstance(ar, (list, tuple)): - attn_resolutions_.append(list(ar)) - else: - attn_resolutions_.append([ar, ar]) - return attn_resolutions_ - - -def coerce_resolution(resolution): - resolution = OmegaConf.to_object(resolution) - if isinstance(resolution, int): - resolution = [resolution, resolution] # H, W - elif isinstance(resolution, (tuple, list)): - resolution = list(resolution) - else: - raise ValueError("Unknown type of resolution:", resolution) - return resolution - - -# done vqvae model - -# vqvae checkpoint - - -def vqvae_original_checkpoint_to_diffusers_checkpoint(model, checkpoint): - 
diffusers_checkpoint = {} - - diffusers_checkpoint.update(vqvae_encoder_to_diffusers_checkpoint(model, checkpoint)) - - # quant_conv - - diffusers_checkpoint.update( - { - "quant_conv.weight": checkpoint["quant_conv.weight"], - "quant_conv.bias": checkpoint["quant_conv.bias"], - } - ) - - # quantize - diffusers_checkpoint.update({"quantize.embedding.weight": checkpoint["quantize.embedding"]}) - - # post_quant_conv - diffusers_checkpoint.update( - { - "post_quant_conv.weight": checkpoint["post_quant_conv.weight"], - "post_quant_conv.bias": checkpoint["post_quant_conv.bias"], - } - ) - - # decoder - diffusers_checkpoint.update(vqvae_decoder_to_diffusers_checkpoint(model, checkpoint)) - - return diffusers_checkpoint - - -def vqvae_encoder_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # conv_in - diffusers_checkpoint.update( - { - "encoder.conv_in.weight": checkpoint["encoder.conv_in.weight"], - "encoder.conv_in.bias": checkpoint["encoder.conv_in.bias"], - } - ) - - # down_blocks - for down_block_idx, down_block in enumerate(model.encoder.down_blocks): - diffusers_down_block_prefix = f"encoder.down_blocks.{down_block_idx}" - down_block_prefix = f"encoder.down.{down_block_idx}" - - # resnets - for resnet_idx, resnet in enumerate(down_block.resnets): - diffusers_resnet_prefix = f"{diffusers_down_block_prefix}.resnets.{resnet_idx}" - resnet_prefix = f"{down_block_prefix}.block.{resnet_idx}" - - diffusers_checkpoint.update( - vqvae_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - # downsample - - # do not include the downsample when on the last down block - # There is no downsample on the last down block - if down_block_idx != len(model.encoder.down_blocks) - 1: - # There's a single downsample in the original checkpoint but a list of downsamples - # in the diffusers model. 
- diffusers_downsample_prefix = f"{diffusers_down_block_prefix}.downsamplers.0.conv" - downsample_prefix = f"{down_block_prefix}.downsample.conv" - diffusers_checkpoint.update( - { - f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"], - f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"], - } - ) - - # attentions - - if hasattr(down_block, "attentions"): - for attention_idx, _ in enumerate(down_block.attentions): - diffusers_attention_prefix = f"{diffusers_down_block_prefix}.attentions.{attention_idx}" - attention_prefix = f"{down_block_prefix}.attn.{attention_idx}" - diffusers_checkpoint.update( - vqvae_attention_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=attention_prefix, - ) - ) - - # mid block - - # mid block attentions - - # There is a single hardcoded attention block in the middle of the VQ-diffusion encoder - diffusers_attention_prefix = "encoder.mid_block.attentions.0" - attention_prefix = "encoder.mid.attn_1" - diffusers_checkpoint.update( - vqvae_attention_to_diffusers_checkpoint( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # mid block resnets - - for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets): - diffusers_resnet_prefix = f"encoder.mid_block.resnets.{diffusers_resnet_idx}" - - # the hardcoded prefixes to `block_` are 1 and 2 - orig_resnet_idx = diffusers_resnet_idx + 1 - # There are two hardcoded resnets in the middle of the VQ-diffusion encoder - resnet_prefix = f"encoder.mid.block_{orig_resnet_idx}" - - diffusers_checkpoint.update( - vqvae_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - diffusers_checkpoint.update( - { - # conv_norm_out - "encoder.conv_norm_out.weight": checkpoint["encoder.norm_out.weight"], - "encoder.conv_norm_out.bias": checkpoint["encoder.norm_out.bias"], - # conv_out - "encoder.conv_out.weight": checkpoint["encoder.conv_out.weight"], - "encoder.conv_out.bias": checkpoint["encoder.conv_out.bias"], - } - ) - - return diffusers_checkpoint - - -def vqvae_decoder_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # conv in - diffusers_checkpoint.update( - { - "decoder.conv_in.weight": checkpoint["decoder.conv_in.weight"], - "decoder.conv_in.bias": checkpoint["decoder.conv_in.bias"], - } - ) - - # up_blocks - - for diffusers_up_block_idx, up_block in enumerate(model.decoder.up_blocks): - # up_blocks are stored in reverse order in the VQ-diffusion checkpoint - orig_up_block_idx = len(model.decoder.up_blocks) - 1 - diffusers_up_block_idx - - diffusers_up_block_prefix = f"decoder.up_blocks.{diffusers_up_block_idx}" - up_block_prefix = f"decoder.up.{orig_up_block_idx}" - - # resnets - for resnet_idx, resnet in enumerate(up_block.resnets): - diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}" - resnet_prefix = f"{up_block_prefix}.block.{resnet_idx}" - - diffusers_checkpoint.update( - vqvae_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - # upsample - - # there is no up sample on the last up block - if diffusers_up_block_idx != len(model.decoder.up_blocks) - 1: - # There's a single upsample in the VQ-diffusion checkpoint but a list of downsamples - # in the diffusers model. 
- diffusers_downsample_prefix = f"{diffusers_up_block_prefix}.upsamplers.0.conv" - downsample_prefix = f"{up_block_prefix}.upsample.conv" - diffusers_checkpoint.update( - { - f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"], - f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"], - } - ) - - # attentions - - if hasattr(up_block, "attentions"): - for attention_idx, _ in enumerate(up_block.attentions): - diffusers_attention_prefix = f"{diffusers_up_block_prefix}.attentions.{attention_idx}" - attention_prefix = f"{up_block_prefix}.attn.{attention_idx}" - diffusers_checkpoint.update( - vqvae_attention_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=attention_prefix, - ) - ) - - # mid block - - # mid block attentions - - # There is a single hardcoded attention block in the middle of the VQ-diffusion decoder - diffusers_attention_prefix = "decoder.mid_block.attentions.0" - attention_prefix = "decoder.mid.attn_1" - diffusers_checkpoint.update( - vqvae_attention_to_diffusers_checkpoint( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # mid block resnets - - for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets): - diffusers_resnet_prefix = f"decoder.mid_block.resnets.{diffusers_resnet_idx}" - - # the hardcoded prefixes to `block_` are 1 and 2 - orig_resnet_idx = diffusers_resnet_idx + 1 - # There are two hardcoded resnets in the middle of the VQ-diffusion decoder - resnet_prefix = f"decoder.mid.block_{orig_resnet_idx}" - - diffusers_checkpoint.update( - vqvae_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - diffusers_checkpoint.update( - { - # conv_norm_out - "decoder.conv_norm_out.weight": checkpoint["decoder.norm_out.weight"], - "decoder.conv_norm_out.bias": checkpoint["decoder.norm_out.bias"], - # conv_out - "decoder.conv_out.weight": checkpoint["decoder.conv_out.weight"], - "decoder.conv_out.bias": checkpoint["decoder.conv_out.bias"], - } - ) - - return diffusers_checkpoint - - -def vqvae_resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix): - rv = { - # norm1 - f"{diffusers_resnet_prefix}.norm1.weight": checkpoint[f"{resnet_prefix}.norm1.weight"], - f"{diffusers_resnet_prefix}.norm1.bias": checkpoint[f"{resnet_prefix}.norm1.bias"], - # conv1 - f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.conv1.weight"], - f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.conv1.bias"], - # norm2 - f"{diffusers_resnet_prefix}.norm2.weight": checkpoint[f"{resnet_prefix}.norm2.weight"], - f"{diffusers_resnet_prefix}.norm2.bias": checkpoint[f"{resnet_prefix}.norm2.bias"], - # conv2 - f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.conv2.weight"], - f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.conv2.bias"], - } - - if resnet.conv_shortcut is not None: - rv.update( - { - f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.nin_shortcut.weight"], - f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{resnet_prefix}.nin_shortcut.bias"], - } - ) - - return rv - - -def vqvae_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix): - return { - # group_norm - 
f"{diffusers_attention_prefix}.group_norm.weight": checkpoint[f"{attention_prefix}.norm.weight"], - f"{diffusers_attention_prefix}.group_norm.bias": checkpoint[f"{attention_prefix}.norm.bias"], - # query - f"{diffusers_attention_prefix}.query.weight": checkpoint[f"{attention_prefix}.q.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.query.bias": checkpoint[f"{attention_prefix}.q.bias"], - # key - f"{diffusers_attention_prefix}.key.weight": checkpoint[f"{attention_prefix}.k.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.key.bias": checkpoint[f"{attention_prefix}.k.bias"], - # value - f"{diffusers_attention_prefix}.value.weight": checkpoint[f"{attention_prefix}.v.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.value.bias": checkpoint[f"{attention_prefix}.v.bias"], - # proj_attn - f"{diffusers_attention_prefix}.proj_attn.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][ - :, :, 0, 0 - ], - f"{diffusers_attention_prefix}.proj_attn.bias": checkpoint[f"{attention_prefix}.proj_out.bias"], - } - - -# done vqvae checkpoint - -# transformer model - -PORTED_DIFFUSIONS = ["image_synthesis.modeling.transformers.diffusion_transformer.DiffusionTransformer"] -PORTED_TRANSFORMERS = ["image_synthesis.modeling.transformers.transformer_utils.Text2ImageTransformer"] -PORTED_CONTENT_EMBEDDINGS = ["image_synthesis.modeling.embeddings.dalle_mask_image_embedding.DalleMaskImageEmbedding"] - - -def transformer_model_from_original_config( - original_diffusion_config, original_transformer_config, original_content_embedding_config -): - assert ( - original_diffusion_config.target in PORTED_DIFFUSIONS - ), f"{original_diffusion_config.target} has not yet been ported to diffusers." - assert ( - original_transformer_config.target in PORTED_TRANSFORMERS - ), f"{original_transformer_config.target} has not yet been ported to diffusers." - assert ( - original_content_embedding_config.target in PORTED_CONTENT_EMBEDDINGS - ), f"{original_content_embedding_config.target} has not yet been ported to diffusers." - - original_diffusion_config = original_diffusion_config.params - original_transformer_config = original_transformer_config.params - original_content_embedding_config = original_content_embedding_config.params - - inner_dim = original_transformer_config["n_embd"] - - n_heads = original_transformer_config["n_head"] - - # VQ-Diffusion gives dimension of the multi-headed attention layers as the - # number of attention heads times the sequence length (the dimension) of a - # single head. We want to specify our attention blocks with those values - # specified separately - assert inner_dim % n_heads == 0 - d_head = inner_dim // n_heads - - depth = original_transformer_config["n_layer"] - context_dim = original_transformer_config["condition_dim"] - - num_embed = original_content_embedding_config["num_embed"] - # the number of embeddings in the transformer includes the mask embedding. - # the content embedding (the vqvae) does not include the mask embedding. 
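-    # i.e. the transformer predicts over (codebook size + 1) classes, the extra class being the mask token.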
- num_embed = num_embed + 1 - - height = original_transformer_config["content_spatial_size"][0] - width = original_transformer_config["content_spatial_size"][1] - - assert width == height, "width has to be equal to height" - dropout = original_transformer_config["resid_pdrop"] - num_embeds_ada_norm = original_diffusion_config["diffusion_step"] - - model_kwargs = { - "attention_bias": True, - "cross_attention_dim": context_dim, - "attention_head_dim": d_head, - "num_layers": depth, - "dropout": dropout, - "num_attention_heads": n_heads, - "num_vector_embeds": num_embed, - "num_embeds_ada_norm": num_embeds_ada_norm, - "norm_num_groups": 32, - "sample_size": width, - "activation_fn": "geglu-approximate", - } - - model = Transformer2DModel(**model_kwargs) - return model - - -# done transformer model - -# transformer checkpoint - - -def transformer_original_checkpoint_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - transformer_prefix = "transformer.transformer" - - diffusers_latent_image_embedding_prefix = "latent_image_embedding" - latent_image_embedding_prefix = f"{transformer_prefix}.content_emb" - - # DalleMaskImageEmbedding - diffusers_checkpoint.update( - { - f"{diffusers_latent_image_embedding_prefix}.emb.weight": checkpoint[ - f"{latent_image_embedding_prefix}.emb.weight" - ], - f"{diffusers_latent_image_embedding_prefix}.height_emb.weight": checkpoint[ - f"{latent_image_embedding_prefix}.height_emb.weight" - ], - f"{diffusers_latent_image_embedding_prefix}.width_emb.weight": checkpoint[ - f"{latent_image_embedding_prefix}.width_emb.weight" - ], - } - ) - - # transformer blocks - for transformer_block_idx, transformer_block in enumerate(model.transformer_blocks): - diffusers_transformer_block_prefix = f"transformer_blocks.{transformer_block_idx}" - transformer_block_prefix = f"{transformer_prefix}.blocks.{transformer_block_idx}" - - # ada norm block - diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm1" - ada_norm_prefix = f"{transformer_block_prefix}.ln1" - - diffusers_checkpoint.update( - transformer_ada_norm_to_diffusers_checkpoint( - checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix - ) - ) - - # attention block - diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn1" - attention_prefix = f"{transformer_block_prefix}.attn1" - - diffusers_checkpoint.update( - transformer_attention_to_diffusers_checkpoint( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # ada norm block - diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm2" - ada_norm_prefix = f"{transformer_block_prefix}.ln1_1" - - diffusers_checkpoint.update( - transformer_ada_norm_to_diffusers_checkpoint( - checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix - ) - ) - - # attention block - diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn2" - attention_prefix = f"{transformer_block_prefix}.attn2" - - diffusers_checkpoint.update( - transformer_attention_to_diffusers_checkpoint( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # norm block - diffusers_norm_block_prefix = f"{diffusers_transformer_block_prefix}.norm3" - norm_block_prefix = f"{transformer_block_prefix}.ln2" - - diffusers_checkpoint.update( - { - f"{diffusers_norm_block_prefix}.weight": checkpoint[f"{norm_block_prefix}.weight"], - 
f"{diffusers_norm_block_prefix}.bias": checkpoint[f"{norm_block_prefix}.bias"], - } - ) - - # feedforward block - diffusers_feedforward_prefix = f"{diffusers_transformer_block_prefix}.ff" - feedforward_prefix = f"{transformer_block_prefix}.mlp" - - diffusers_checkpoint.update( - transformer_feedforward_to_diffusers_checkpoint( - checkpoint, - diffusers_feedforward_prefix=diffusers_feedforward_prefix, - feedforward_prefix=feedforward_prefix, - ) - ) - - # to logits - - diffusers_norm_out_prefix = "norm_out" - norm_out_prefix = f"{transformer_prefix}.to_logits.0" - - diffusers_checkpoint.update( - { - f"{diffusers_norm_out_prefix}.weight": checkpoint[f"{norm_out_prefix}.weight"], - f"{diffusers_norm_out_prefix}.bias": checkpoint[f"{norm_out_prefix}.bias"], - } - ) - - diffusers_out_prefix = "out" - out_prefix = f"{transformer_prefix}.to_logits.1" - - diffusers_checkpoint.update( - { - f"{diffusers_out_prefix}.weight": checkpoint[f"{out_prefix}.weight"], - f"{diffusers_out_prefix}.bias": checkpoint[f"{out_prefix}.bias"], - } - ) - - return diffusers_checkpoint - - -def transformer_ada_norm_to_diffusers_checkpoint(checkpoint, *, diffusers_ada_norm_prefix, ada_norm_prefix): - return { - f"{diffusers_ada_norm_prefix}.emb.weight": checkpoint[f"{ada_norm_prefix}.emb.weight"], - f"{diffusers_ada_norm_prefix}.linear.weight": checkpoint[f"{ada_norm_prefix}.linear.weight"], - f"{diffusers_ada_norm_prefix}.linear.bias": checkpoint[f"{ada_norm_prefix}.linear.bias"], - } - - -def transformer_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix): - return { - # key - f"{diffusers_attention_prefix}.to_k.weight": checkpoint[f"{attention_prefix}.key.weight"], - f"{diffusers_attention_prefix}.to_k.bias": checkpoint[f"{attention_prefix}.key.bias"], - # query - f"{diffusers_attention_prefix}.to_q.weight": checkpoint[f"{attention_prefix}.query.weight"], - f"{diffusers_attention_prefix}.to_q.bias": checkpoint[f"{attention_prefix}.query.bias"], - # value - f"{diffusers_attention_prefix}.to_v.weight": checkpoint[f"{attention_prefix}.value.weight"], - f"{diffusers_attention_prefix}.to_v.bias": checkpoint[f"{attention_prefix}.value.bias"], - # linear out - f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj.weight"], - f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj.bias"], - } - - -def transformer_feedforward_to_diffusers_checkpoint(checkpoint, *, diffusers_feedforward_prefix, feedforward_prefix): - return { - f"{diffusers_feedforward_prefix}.net.0.proj.weight": checkpoint[f"{feedforward_prefix}.0.weight"], - f"{diffusers_feedforward_prefix}.net.0.proj.bias": checkpoint[f"{feedforward_prefix}.0.bias"], - f"{diffusers_feedforward_prefix}.net.2.weight": checkpoint[f"{feedforward_prefix}.2.weight"], - f"{diffusers_feedforward_prefix}.net.2.bias": checkpoint[f"{feedforward_prefix}.2.bias"], - } - - -# done transformer checkpoint - - -def read_config_file(filename): - # The yaml file contains annotations that certain values should - # loaded as tuples. By default, OmegaConf will panic when reading - # these. Instead, we can manually read the yaml with the FullLoader and then - # construct the OmegaConf object. - with open(filename) as f: - original_config = yaml.load(f, FullLoader) - - return OmegaConf.create(original_config) - - -# We take separate arguments for the vqvae because the ITHQ vqvae config file -# is separate from the config file for the rest of the model. 
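-
-# Minimal sketch of how the helpers above fit together outside the CLI entry point below.
-# Illustrative only: the "ithq.yaml" / "ithq_learnable.pth" filenames are the ones assumed by the
-# download commands in the module docstring, not requirements of the functions themselves.
-#
-#   original_config = read_config_file("ithq.yaml").model
-#   diffusion_config = original_config.params.diffusion_config
-#   transformer_config = diffusion_config.params.transformer_config
-#   content_emb_config = diffusion_config.params.content_emb_config
-#
-#   with init_empty_weights():
-#       transformer = transformer_model_from_original_config(
-#           diffusion_config, transformer_config, content_emb_config
-#       )
-#
-#   state_dict = torch.load("ithq_learnable.pth", map_location="cpu")["model"]
-#   diffusers_state_dict = transformer_original_checkpoint_to_diffusers_checkpoint(transformer, state_dict)
-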
-if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--vqvae_checkpoint_path", - default=None, - type=str, - required=True, - help="Path to the vqvae checkpoint to convert.", - ) - - parser.add_argument( - "--vqvae_original_config_file", - default=None, - type=str, - required=True, - help="The YAML config file corresponding to the original architecture for the vqvae.", - ) - - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - - parser.add_argument( - "--original_config_file", - default=None, - type=str, - required=True, - help="The YAML config file corresponding to the original architecture.", - ) - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - - parser.add_argument( - "--checkpoint_load_device", - default="cpu", - type=str, - required=False, - help="The device passed to `map_location` when loading checkpoints.", - ) - - # See link for how ema weights are always selected - # https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/inference_VQ_Diffusion.py#L65 - parser.add_argument( - "--no_use_ema", - action="store_true", - required=False, - help=( - "Set to not use the ema weights from the original VQ-Diffusion checkpoint. You probably do not want to set" - " it as the original VQ-Diffusion always uses the ema weights when loading models." - ), - ) - - args = parser.parse_args() - - use_ema = not args.no_use_ema - - print(f"loading checkpoints to {args.checkpoint_load_device}") - - checkpoint_map_location = torch.device(args.checkpoint_load_device) - - # vqvae_model - - print(f"loading vqvae, config: {args.vqvae_original_config_file}, checkpoint: {args.vqvae_checkpoint_path}") - - vqvae_original_config = read_config_file(args.vqvae_original_config_file).model - vqvae_checkpoint = torch.load(args.vqvae_checkpoint_path, map_location=checkpoint_map_location)["model"] - - with init_empty_weights(): - vqvae_model = vqvae_model_from_original_config(vqvae_original_config) - - vqvae_diffusers_checkpoint = vqvae_original_checkpoint_to_diffusers_checkpoint(vqvae_model, vqvae_checkpoint) - - with tempfile.NamedTemporaryFile() as vqvae_diffusers_checkpoint_file: - torch.save(vqvae_diffusers_checkpoint, vqvae_diffusers_checkpoint_file.name) - del vqvae_diffusers_checkpoint - del vqvae_checkpoint - load_checkpoint_and_dispatch(vqvae_model, vqvae_diffusers_checkpoint_file.name, device_map="auto") - - print("done loading vqvae") - - # done vqvae_model - - # transformer_model - - print( - f"loading transformer, config: {args.original_config_file}, checkpoint: {args.checkpoint_path}, use ema:" - f" {use_ema}" - ) - - original_config = read_config_file(args.original_config_file).model - - diffusion_config = original_config.params.diffusion_config - transformer_config = original_config.params.diffusion_config.params.transformer_config - content_embedding_config = original_config.params.diffusion_config.params.content_emb_config - - pre_checkpoint = torch.load(args.checkpoint_path, map_location=checkpoint_map_location) - - if use_ema: - if "ema" in pre_checkpoint: - checkpoint = {} - for k, v in pre_checkpoint["model"].items(): - checkpoint[k] = v - - for k, v in pre_checkpoint["ema"].items(): - # The ema weights are only used on the transformer. To mimic their key as if they came - # from the state_dict for the top level model, we prefix with an additional "transformer." 
- # See the source linked in the args.use_ema config for more information. - checkpoint[f"transformer.{k}"] = v - else: - print("attempted to load ema weights but no ema weights are specified in the loaded checkpoint.") - checkpoint = pre_checkpoint["model"] - else: - checkpoint = pre_checkpoint["model"] - - del pre_checkpoint - - with init_empty_weights(): - transformer_model = transformer_model_from_original_config( - diffusion_config, transformer_config, content_embedding_config - ) - - diffusers_transformer_checkpoint = transformer_original_checkpoint_to_diffusers_checkpoint( - transformer_model, checkpoint - ) - - # classifier free sampling embeddings interlude - - # The learned embeddings are stored on the transformer in the original VQ-diffusion. We store them on a separate - # model, so we pull them off the checkpoint before the checkpoint is deleted. - - learnable_classifier_free_sampling_embeddings = diffusion_config.params.learnable_cf - - if learnable_classifier_free_sampling_embeddings: - learned_classifier_free_sampling_embeddings_embeddings = checkpoint["transformer.empty_text_embed"] - else: - learned_classifier_free_sampling_embeddings_embeddings = None - - # done classifier free sampling embeddings interlude - - with tempfile.NamedTemporaryFile() as diffusers_transformer_checkpoint_file: - torch.save(diffusers_transformer_checkpoint, diffusers_transformer_checkpoint_file.name) - del diffusers_transformer_checkpoint - del checkpoint - load_checkpoint_and_dispatch(transformer_model, diffusers_transformer_checkpoint_file.name, device_map="auto") - - print("done loading transformer") - - # done transformer_model - - # text encoder - - print("loading CLIP text encoder") - - clip_name = "openai/clip-vit-base-patch32" - - # The original VQ-Diffusion specifies the pad value by the int used in the - # returned tokens. Each model uses `0` as the pad value. The transformers clip api - # specifies the pad value via the token before it has been tokenized. The `!` pad - # token is the same as padding with the `0` pad value. - pad_token = "!" 
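-    # "!" is expected to map to token id 0 in the CLIP vocabulary; the assert below verifies this.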
- - tokenizer_model = CLIPTokenizer.from_pretrained(clip_name, pad_token=pad_token, device_map="auto") - - assert tokenizer_model.convert_tokens_to_ids(pad_token) == 0 - - text_encoder_model = CLIPTextModel.from_pretrained( - clip_name, - # `CLIPTextModel` does not support device_map="auto" - # device_map="auto" - ) - - print("done loading CLIP text encoder") - - # done text encoder - - # scheduler - - scheduler_model = VQDiffusionScheduler( - # the scheduler has the same number of embeddings as the transformer - num_vec_classes=transformer_model.num_vector_embeds - ) - - # done scheduler - - # learned classifier free sampling embeddings - - with init_empty_weights(): - learned_classifier_free_sampling_embeddings_model = LearnedClassifierFreeSamplingEmbeddings( - learnable_classifier_free_sampling_embeddings, - hidden_size=text_encoder_model.config.hidden_size, - length=tokenizer_model.model_max_length, - ) - - learned_classifier_free_sampling_checkpoint = { - "embeddings": learned_classifier_free_sampling_embeddings_embeddings.float() - } - - with tempfile.NamedTemporaryFile() as learned_classifier_free_sampling_checkpoint_file: - torch.save(learned_classifier_free_sampling_checkpoint, learned_classifier_free_sampling_checkpoint_file.name) - del learned_classifier_free_sampling_checkpoint - del learned_classifier_free_sampling_embeddings_embeddings - load_checkpoint_and_dispatch( - learned_classifier_free_sampling_embeddings_model, - learned_classifier_free_sampling_checkpoint_file.name, - device_map="auto", - ) - - # done learned classifier free sampling embeddings - - print(f"saving VQ diffusion model, path: {args.dump_path}") - - pipe = VQDiffusionPipeline( - vqvae=vqvae_model, - transformer=transformer_model, - tokenizer=tokenizer_model, - text_encoder=text_encoder_model, - learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings_model, - scheduler=scheduler_model, - ) - pipe.save_pretrained(args.dump_path) - - print("done writing VQ diffusion model") diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md deleted file mode 100644 index 51e5e7a5b815e6c08ea4f9fa46800b18eebf42c3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# CornerNet - -## Introduction - -[ALGORITHM] - -```latex -@inproceedings{law2018cornernet, - title={Cornernet: Detecting objects as paired keypoints}, - author={Law, Hei and Deng, Jia}, - booktitle={15th European Conference on Computer Vision, ECCV 2018}, - pages={765--781}, - year={2018}, - organization={Springer Verlag} -} -``` - -## Results and models - -| Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :--------: |:----------------: | :------: | :------------: | :----: | :------: | :--------: | -| HourglassNet-104 | [10 x 5](./cornernet_hourglass104_mstest_10x5_210e_coco.py) | 180/210 | 13.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720-5fefbf1c.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720.log.json) | -| HourglassNet-104 | [8 x 6](./cornernet_hourglass104_mstest_8x6_210e_coco.py) | 180/210 | 15.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618.log.json) | -| HourglassNet-104 | [32 x 3](./cornernet_hourglass104_mstest_32x3_210e_coco.py) | 180/210 | 9.5 | 3.9 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110-1efaea91.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110.log.json) | - -Note: - -- TTA setting is single-scale and `flip=True`. -- Experiments with `images_per_gpu=6` are conducted on Tesla V100-SXM2-32GB, `images_per_gpu=3` are conducted on GeForce GTX 1080 Ti. -- Here are the descriptions of each experiment setting: - - 10 x 5: 10 GPUs with 5 images per gpu. This is the same setting as that reported in the original paper. - - 8 x 6: 8 GPUs with 6 images per gpu. The total batchsize is similar to paper and only need 1 node to train. - - 32 x 3: 32 GPUs with 3 images per gpu. The default setting for 1080TI and need 4 nodes to train. diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py b/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py deleted file mode 100644 index fb327f981d10cf94e6a7f55f5b2b4497d3e7a9cb..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py +++ /dev/null @@ -1,39 +0,0 @@ -import json -import cv2 -import numpy as np - -from torch.utils.data import Dataset - - -class MyDataset(Dataset): - def __init__(self): - self.data = [] - with open('./training/fill50k/prompt.json', 'rt') as f: - for line in f: - self.data.append(json.loads(line)) - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - item = self.data[idx] - - source_filename = item['source'] - target_filename = item['target'] - prompt = item['prompt'] - - source = cv2.imread('./training/fill50k/' + source_filename) - target = cv2.imread('./training/fill50k/' + target_filename) - - # Do not forget that OpenCV read images in BGR order. - source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB) - target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB) - - # Normalize source images to [0, 1]. - source = source.astype(np.float32) / 255.0 - - # Normalize target images to [-1, 1]. 
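-        # (x / 127.5) - 1.0 maps pixel value 0 to -1.0, 127.5 to 0.0, and 255 to 1.0.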
- target = (target.astype(np.float32) / 127.5) - 1.0 - - return dict(jpg=target, txt=prompt, hint=source) - diff --git a/spaces/Apex-X/nono/roop/capturer.py b/spaces/Apex-X/nono/roop/capturer.py deleted file mode 100644 index 515fc8e54a9a3709ceee4c340f33e0b907416073..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/nono/roop/capturer.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import Optional -import cv2 - -from roop.typing import Frame - - -def get_video_frame(video_path: str, frame_number: int = 0) -> Optional[Frame]: - capture = cv2.VideoCapture(video_path) - frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT) - capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1)) - has_frame, frame = capture.read() - capture.release() - if has_frame: - return frame - return None - - -def get_video_frame_total(video_path: str) -> int: - capture = cv2.VideoCapture(video_path) - video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - capture.release() - return video_frame_total diff --git a/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md b/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md deleted file mode 100644 index bd116b5a1b99491ef33d9f9fa30a230708825278..0000000000000000000000000000000000000000 --- a/spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ashish Open Chat AI 17 -emoji: 📚 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py deleted file mode 100644 index 028dcfa0fc4b3a07307989c40389b2042ceafc03..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -"""distutils.command - -Package containing implementation of all the standard Distutils -commands.""" - -__all__ = [ # noqa: F822 - 'build', - 'build_py', - 'build_ext', - 'build_clib', - 'build_scripts', - 'clean', - 'install', - 'install_lib', - 'install_headers', - 'install_scripts', - 'install_data', - 'sdist', - 'register', - 'bdist', - 'bdist_dumb', - 'bdist_rpm', - 'check', - 'upload', -] diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py deleted file mode 100644 index e9f728f2f273be5d5fdbec6c6cc41d737176a8c0..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from .factory import ( - list_models, - create_model, - create_model_and_transforms, - add_model_config, -) -from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics -from .model import ( - CLAP, - CLAPTextCfg, - CLAPVisionCfg, - CLAPAudioCfp, - convert_weights_to_fp16, - trace_model, -) -from .openai import load_openai_model, list_openai_models -from .pretrained import ( - list_pretrained, - list_pretrained_tag_models, - list_pretrained_model_tags, - get_pretrained_url, - download_pretrained, -) -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform diff --git 
a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py deleted file mode 100644 index 5a69e178a5ac67f69c2eeab667b9c0740a862eee..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py +++ /dev/null @@ -1,63 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Modified by Xingyi Zhou -""" -Implement many useful :class:`Augmentation`. -""" -import numpy as np -import sys -from fvcore.transforms.transform import ( - BlendTransform, - CropTransform, - HFlipTransform, - NoOpTransform, - Transform, - VFlipTransform, -) -from PIL import Image - -from detectron2.data.transforms.augmentation import Augmentation -from .custom_transform import EfficientDetResizeCropTransform - -__all__ = [ - "EfficientDetResizeCrop", -] - - -class EfficientDetResizeCrop(Augmentation): - """ - Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. - """ - - def __init__( - self, size, scale, interp=Image.BILINEAR - ): - """ - Args: - """ - super().__init__() - self.target_size = (size, size) - self.scale = scale - self.interp = interp - - def get_transform(self, img): - # Select a random scale factor. - scale_factor = np.random.uniform(*self.scale) - scaled_target_height = scale_factor * self.target_size[0] - scaled_target_width = scale_factor * self.target_size[1] - # Recompute the accurate scale_factor using rounded scaled image size. - width, height = img.shape[1], img.shape[0] - img_scale_y = scaled_target_height / height - img_scale_x = scaled_target_width / width - img_scale = min(img_scale_y, img_scale_x) - - # Select non-zero random offset (x, y) if scaled image is larger than target size - scaled_h = int(height * img_scale) - scaled_w = int(width * img_scale) - offset_y = scaled_h - self.target_size[0] - offset_x = scaled_w - self.target_size[1] - offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1)) - offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1)) - return EfficientDetResizeCropTransform( - scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp) diff --git a/spaces/AzinZ/vitscn/text/__init__.py b/spaces/AzinZ/vitscn/text/__init__.py deleted file mode 100644 index 4ac41f9025755d8ffd74068af14c6cfc8e5a4173..0000000000000000000000000000000000000000 --- a/spaces/AzinZ/vitscn/text/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx deleted file mode 100644 index 459bc1c5b6cdde1641919751a5e202706970f4a9..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx +++ /dev/null @@ -1,20 +0,0 @@ -import { fonts } from "@/lib/fonts" -import { cn } from "@/lib/utils" - -export function Maintenance() { - return ( -
-    <div>
-      <p>🚧 Maintenance in progress 🚧</p>
-      <p>See the announcement here</p>
-      <p>This shouldn't last long, so stay tuned!</p>
-    </div>
    - ) -} \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md b/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md deleted file mode 100644 index 5450cc08f3954ef3fc1b00c9f652b953afee0579..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md +++ /dev/null @@ -1,75 +0,0 @@ - -

    Aparcamiento de coches multijugador APK Skachat: Una guía para descargar y jugar el juego en su PC

    -

    Si usted está buscando un juego de simulación de estacionamiento de coches realista y divertido, es posible que desee probar Parking Multijugador. Este juego es desarrollado por olzhass y tiene más de 100 millones de descargas en Google Play Store. Pero ¿qué pasa si desea jugar en su PC en lugar de su dispositivo móvil? En este artículo, le mostraremos cómo descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC utilizando dos emuladores populares de Android: BlueStacks y NoxPlayer. También te daremos algunos consejos sobre cómo jugar el juego en tu PC y disfrutar de sus características.

    -

    ¿Qué es el Aparcamiento Multijugador?

    -

    Car Parking Multiplayer es un juego de simulación que te permite experimentar la emoción de aparcar varios coches en diferentes escenarios. Puede elegir entre más de 100 coches con interiores reales y personalizarlos con afinación, vinilos y partes del cuerpo. También puede explorar un mundo abierto con estaciones de servicio y servicios de automóviles reales, competir contra otros jugadores en carreras multijugador, intercambiar coches con otros jugadores, chatear con amigos e incluso jugar roles como oficial de policía.

    -

    aparcamiento de coches multijugador apk skachat


    DOWNLOAD » https://bltlly.com/2v6L8k



    -

    Características del juego

    -

    Algunas de las características de Aparcamiento multijugador son:

    - -

    Requisitos y compatibilidad

    - -

    Para jugar Car Parking Multijugador en su PC, es necesario tener un equipo con Windows o Mac con al menos 4 GB de RAM y 5 GB de espacio en disco libre. También es necesario descargar e instalar un emulador de Android como BlueStacks o NoxPlayer que puede ejecutar el juego sin problemas en su PC. Explicaremos cómo hacerlo en la siguiente sección.

    -

    Cómo descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC?

    -

    Aparcamiento de coches multijugador APK Skachat es una versión modificada del juego original que le permite descargar e instalar de forma gratuita sin restricciones. Sin embargo, ya que no está disponible en las tiendas de aplicaciones oficiales como Google Play Store o Apple App Store, debe usar una fuente de terceros para obtenerlo. Una de las fuentes más fiables es APKPure.com, donde se puede encontrar la última versión de Aparcamiento Multijugador APK Skachat junto con su información de archivos y comentarios de los usuarios.

    -

    Para descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC usando BlueStacks o NoxPlayer emulador, siga estos pasos:

    -

    Usando el emulador de BlueStacks

    -
      -
    1. Descargar e instalar el emulador BlueStacks desde su sitio web oficial[ 3 ] .
    2. -
    3. Inicie BlueStacks e inicie sesión con su cuenta de Google o cree una nueva.
    4. -
    5. Abra la aplicación del navegador en BlueStacks y vaya a APKPure.com. Buscar Aparcamiento de coches multijugador APK Skachat y descargarlo en su PC.
    6. -
    7. Busque el archivo descargado en su PC y haga clic derecho en él. Elija "Abrir con" y seleccione BlueStacks como el emulador.
    8. -
    9. Espere a que el proceso de instalación se complete y luego abra el juego desde la pantalla de inicio de BlueStacks.
    10. -
    -

    Usando el emulador de NoxPlayer

    -
      -
    1. Descargar e instalar el emulador NoxPlayer desde su sitio web oficial.
    2. -
    3. Inicie NoxPlayer e inicie sesión con su cuenta de Google o cree una nueva.
    4. - -
    5. Arrastre y suelte el archivo descargado a la ventana NoxPlayer y espere a que se complete el proceso de instalación.
    6. -
    7. Abre el juego desde la pantalla de inicio de NoxPlayer y disfruta.
    8. -
    -

    ¿Cómo se juega Aparcamiento de coches multijugador en su PC?

    -

    Una vez que haya descargado e instalado Aparcamiento Multijugador APK Skachat en su PC utilizando BlueStacks o NoxPlayer emulador, puede comenzar a jugar el juego en su PC. Aquí hay algunos consejos sobre cómo jugar el juego en su PC:

    -

    Controles y ajustes

    -

    Puedes usar el teclado y el ratón para controlar el juego en tu PC. También puedes personalizar la asignación de teclas según tus preferencias. Para ello, haga clic en el icono del teclado en la esquina inferior derecha de la pantalla del emulador y elija "Controles del juego". A continuación, puede arrastrar y soltar las teclas de los botones correspondientes en la pantalla del juego. También puede ajustar la sensibilidad, la transparencia y el tamaño de las teclas. Para guardar la configuración, haga clic en "Guardar" y luego en "Cerrar".

    -

    También puede cambiar la configuración del juego como gráficos, sonido, idioma, cámara, etc. haciendo clic en el icono de engranaje en la esquina superior derecha de la pantalla del juego. A continuación, puede elegir entre baja, media, alta o ultra calidad gráfica, habilitar o desactivar efectos de sonido y música, seleccionar su idioma preferido, cambiar entre diferentes modos de cámara, etc. Para aplicar los cambios, haga clic en "OK".

    -

    -

    Consejos y trucos

    -

    Aquí hay algunos consejos y trucos para ayudarle a jugar Car Parking Multijugador mejor en su PC:

    - -

    Conclusión

    -

    Car Parking Multiplayer es un divertido y realista juego de simulación de aparcamiento que puedes jugar en tu PC usando un emulador de Android como BlueStacks o NoxPlayer. Puede descargar e instalar Aparcamiento de coches multijugador APK Skachat de forma gratuita desde APKPure.com y disfrutar de sus características tales como modo multijugador mundo abierto, personalización del coche, alta-altográficos de calidad, etc. Esperamos que este artículo le ha ayudado a aprender a descargar y jugar Aparcamiento de coches multijugador APK Skachat en su PC. Si tiene alguna pregunta o comentario, háganoslo saber en los comentarios a continuación.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes acerca de Aparcamiento de coches multijugador APK Skachat:

    -

    ¿Es seguro para descargar aparcamiento multijugador APK Skachat?

    -

    Sí, Aparcamiento de coches multijugador APK Skachat es seguro para descargar siempre y cuando se utiliza una fuente de confianza como APKPure.com. Sin embargo, siempre debe tener cuidado al descargar e instalar cualquier archivo APK de fuentes desconocidas, ya que pueden contener malware o virus que pueden dañar su dispositivo o datos. También debe comprobar la información del archivo y las opiniones de los usuarios antes de descargar e instalar cualquier archivo APK.

    -

    ¿Cuáles son las ventajas de jugar Car Parking Multijugador en PC?

    - - -

    ¿Puedo jugar Aparcamiento de coches multijugador fuera de línea?

    -

    No, no puedes jugar Car Parking Multijugador sin conexión, ya que requiere una conexión a Internet para acceder a algunas de sus características como el modo multijugador, chat en línea, intercambio de coches, etc. Sin embargo, todavía se puede jugar el juego en una solamodo de reproductor sin conexión a Internet mediante la elección de la opción sin conexión del menú.

    -

    ¿Cómo puedo actualizar Aparcamiento de coches multijugador APK Skachat?

    -

    Para actualizar Aparcamiento Multijugador APK Skachat, es necesario descargar e instalar la última versión del archivo APK de APKPure.com o cualquier otra fuente confiable. También puedes buscar actualizaciones de la configuración del juego haciendo clic en el icono del engranaje y luego elegir "Buscar actualizaciones". Si hay una nueva versión disponible, puedes descargarla e instalarla desde allí.

    -

    ¿Cómo puedo contactar al desarrollador de Car Parking Multijugador?

    -

    Si tienes alguna pregunta, sugerencia, comentario, o problemas con respecto a Car Parking Multijugador, puede ponerse en contacto con el desarrollador del juego enviando un correo electrónico a olzhass@gmail.com o visitando su página de Facebook. También puede unirse a su servidor de discordia para chatear con otros jugadores y obtener apoyo de los moderadores.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md b/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md deleted file mode 100644 index e5b3f34a36ab5b0d00ae1770a93c0ecde995b83a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md +++ /dev/null @@ -1,68 +0,0 @@ - -

    Cómo descargar e instalar caso penal APK con Cheat

    -

    Si te gusta jugar juegos de detectives en tu dispositivo Android, es posible que haya oído hablar de Criminal Case. Es un popular juego de objetos ocultos donde tienes que investigar casos de asesinato, encontrar pistas, interrogar a los sospechosos y atrapar a los asesinos. Pero lo que si quieres hacer el juego más divertido y fácil? Ahí es donde Criminal Case APK con engaño entra en juego. En este artículo, te mostraremos cómo descargar e instalar esta versión modificada del juego que te da energía ilimitada, pistas, estrellas y más. También te explicaremos qué es un archivo APK, cómo instalarlo en tu dispositivo, cómo jugar a Criminal Case con trucos y cuáles son los pros y los contras de usarlo.

    -

    ¿Qué es un archivo APK y cómo instalarlo en Android

    -

    Un archivo APK es un archivo de paquete de Android que contiene todos los archivos y el código necesario para ejecutar una aplicación en su dispositivo Android. Es similar a un archivo EXE en Windows o un archivo DMG en Mac. Los archivos APK se utilizan generalmente para distribuir aplicaciones que no están disponibles en Google Play Store, o para actualizar aplicaciones antes de su lanzamiento oficial. También puedes usar archivos APK para instalar versiones modificadas o hackeadas de aplicaciones que ofrecen características o beneficios adicionales.

    -

    apk caso penal con trampa


    Downloadhttps://bltlly.com/2v6LMp



    -

    Para instalar un archivo APK en tu dispositivo Android, necesitas hacer dos cosas. Primero, necesitas habilitar fuentes desconocidas en la configuración de tu dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, ve a Configuración > Aplicaciones > Menú > Acceso especial > Instalar aplicaciones desconocidas. Luego, selecciona la aplicación de tu navegador (como Chrome) y activa la opción Permitir desde esta fuente.

    - -

    ¿Qué es el caso penal APK con Cheat

    -

    Caso Penal APK con trampa es una versión modificada de Caso Penal que le da acceso a recursos ilimitados y características que pueden ayudarle a resolver los casos más rápido y más fácil. Algunas de las características incluyen:

    - -

    Con estas características, usted puede tener más diversión y emoción jugando Criminal Case. También puedes ahorrar tiempo y dinero al no tener que comprar energía o pistas con dinero real.

    -

    Cómo descargar caso penal APK con Cheat

    -

    Para descargar Criminal Case APK con cheat, es necesario seguir estos pasos:

    1. Ir a un sitio web que ofrece APK Criminal Case con cheat. Puede utilizar la aplicación de su navegador para buscar estos sitios web, o puede utilizar uno de los siguientes enlaces:

    - - -Sitio web -URL - - -Filehippo -Descargar caso penal APK 2.39 para Android - Filehippo.com - - -APKCombo -Criminal Case APK (Android Game) - Descarga gratuita - APKCombo - - -

    2. Elija la versión de Criminal Case APK con truco que desea descargar. Asegúrese de que es compatible con su dispositivo y tiene las características que desea.

    - -

    4. Una vez descargado el archivo, búsquelo en su dispositivo usando la aplicación del navegador o una aplicación de administrador de archivos. Toque el archivo para instalarlo. Es posible que necesite aceptar algunas ventanas emergentes o permisos antes de instalar el archivo.

    -

    -

    5. Después de que la instalación se haya completado, puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio. ¡Disfrute jugando Criminal Case con trucos!

    -

    Cómo Jugar Caso Criminal con Cheat

    -

    Jugar a Criminal Case con trucos es similar a jugar la versión original del juego, excepto que tienes acceso a recursos ilimitados y características que pueden hacer el juego más fácil y más divertido. Aquí hay algunos consejos y trucos sobre cómo jugar Criminal Case con cheat:

    - - -

    Pros y contras de usar APK caso penal con trampa

    -

    El uso de APK Caso Penal con trampa tiene sus pros y sus contras. Aquí están algunos de ellos:

    - | Pros | Contras | | -- | -- - | | Usted puede tener más diversión y emoción jugando Caso Criminal | Usted puede perder algo del desafío y emoción de jugar Caso Criminal | | | Usted puede ahorrar tiempo y dinero por no tener que comprar energía o pistas con dinero real | Usted puede encontrar algunos errores o errores que pueden afectar su rendimiento del juego | | Puede probar diferentes características y opciones que no están disponibles en la versión original del juego | Puede violar algunos términos y condiciones del desarrollador del juego o Google Play Store | | Puede compartir sus logros y progresos con sus amigos y otros jugadores | Usted puede correr el riesgo de perder sus datos de juego o cuenta si desinstalar o actualizar el juego |

    Usted debe sopesar estos pros y contras antes de decidir si desea utilizar Criminal Case APK con trampa o no. En última instancia, depende de su preferencia personal y estilo de juego.

    -

    Conclusión

    -

    Criminal Case es un divertido y adictivo juego de objetos ocultos que te permite jugar como detective y resolver casos de asesinato. Pero si quieres hacer el juego más divertido y fácil, se puede tratar de usar Caso Penal APK con trampa. Esta es una versión modificada del juego que te da energía ilimitada, pistas, estrellas y más. Puede descargar e instalar esta versión desde un sitio web de buena reputación y disfrutar jugando Criminal Case con trampa.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas y respuestas frecuentes sobre Caso Penal APK con trampa:

    -
      - -
    1. ¿Es legal usar Caso Penal APK con trampa?
      El uso de APK Caso Penal con trampa puede no ser legal en algunos países o regiones, ya que puede violar algunos términos y condiciones del desarrollador del juego o Google Play Store. Usted debe comprobar las leyes y reglamentos de su ubicación antes de usar Caso Penal APK con trampa. También debe respetar los derechos e intereses del desarrollador del juego y otros jugadores, y no utilizar Criminal Case APK con engaño para cualquier propósito malicioso o fraudulento.
    2. -
    3. Se Caso Penal APK con trucos de trabajo en mi dispositivo?
      Caso Penal APK con trampa debe funcionar en la mayoría de los dispositivos Android que soportan la versión original de Caso Penal. Sin embargo, algunos dispositivos pueden no ser compatibles con Criminal Case APK con trampa, o pueden experimentar algunos problemas o errores al usarlo. Usted debe comprobar la compatibilidad y los requisitos de Caso Penal APK con tramposo antes de descargar e instalar en su dispositivo. También debe actualizar el software y la configuración del dispositivo para garantizar un rendimiento óptimo.
    4. -
    5. ¿Puedo jugar APK Caso Penal con tramposo en línea o fuera de línea?
      Puedes jugar APK Caso Penal con tramposo tanto en línea como fuera de línea. Sin embargo, algunas características y funciones pueden requerir una conexión a Internet para funcionar correctamente, como sincronizar los datos y la cuenta del juego, acceder a nuevos casos y actualizaciones o interactuar con otros jugadores. Usted debe asegurarse de que tiene una conexión a Internet estable y segura al jugar Caso Penal APK con trampa en línea.
    6. - -
    -

    Espero que este artículo le ha ayudado a aprender más acerca de Caso Penal APK con trampa y cómo descargar e instalar en su dispositivo Android. Si tiene alguna pregunta o comentario, por favor deje un comentario abajo. ¡Gracias por leer!

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/BilalSardar/Remove_Text_for_Image/README.md b/spaces/BilalSardar/Remove_Text_for_Image/README.md deleted file mode 100644 index 7bbdbd7a110c5977c0f87419618277a201812c57..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Remove_Text_for_Image/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Remove Text For Image -emoji: 👀 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/CVPR/LIVE/pydiffvg_tensorflow/render_tensorflow.py b/spaces/CVPR/LIVE/pydiffvg_tensorflow/render_tensorflow.py deleted file mode 100644 index 3a7efaa3fddef32fc2619c3fcaa88881354a7e9f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pydiffvg_tensorflow/render_tensorflow.py +++ /dev/null @@ -1,664 +0,0 @@ -import os -import tensorflow as tf -import diffvg -import pydiffvg_tensorflow as pydiffvg -import time -from enum import IntEnum -import warnings - -print_timing = False -__EMPTY_TENSOR = tf.constant([]) - -def is_empty_tensor(tensor): - return tf.equal(tf.size(tensor), 0) - -def set_print_timing(val): - global print_timing - print_timing=val - -class OutputType(IntEnum): - color = 1 - sdf = 2 - -class ShapeType: - __shapetypes = [ - diffvg.ShapeType.circle, - diffvg.ShapeType.ellipse, - diffvg.ShapeType.path, - diffvg.ShapeType.rect - ] - - @staticmethod - def asTensor(type): - for i in range(len(ShapeType.__shapetypes)): - if ShapeType.__shapetypes[i] == type: - return tf.constant(i) - - @staticmethod - def asShapeType(index: tf.Tensor): - if is_empty_tensor(index): - return None - try: - type = ShapeType.__shapetypes[index] - except IndexError: - print(f'{index} is out of range: [0, {len(ShapeType.__shapetypes)})') - import sys - sys.exit() - else: - return type - -class ColorType: - __colortypes = [ - diffvg.ColorType.constant, - diffvg.ColorType.linear_gradient, - diffvg.ColorType.radial_gradient - ] - - @staticmethod - def asTensor(type): - for i in range(len(ColorType.__colortypes)): - if ColorType.__colortypes[i] == type: - return tf.constant(i) - - @staticmethod - def asColorType(index: tf.Tensor): - if is_empty_tensor(index): - return None - try: - type = ColorType.__colortypes[index] - except IndexError: - print(f'{index} is out of range: [0, {len(ColorType.__colortypes)})') - import sys - sys.exit() - else: - return type - -class FilterType: - __filtertypes = [ - diffvg.FilterType.box, - diffvg.FilterType.tent, - diffvg.FilterType.hann - ] - - @staticmethod - def asTensor(type): - for i in range(len(FilterType.__filtertypes)): - if FilterType.__filtertypes[i] == type: - return tf.constant(i) - - @staticmethod - def asFilterType(index: tf.Tensor): - if is_empty_tensor(index): - return None - try: - type = FilterType.__filtertypes[index] - except IndexError: - print(f'{index} is out of range: [0, {len(FilterType.__filtertypes)})') - import sys - sys.exit() - else: - return type - -def serialize_scene(canvas_width, - canvas_height, - shapes, - shape_groups, - filter = pydiffvg.PixelFilter(type = diffvg.FilterType.box, - radius = tf.constant(0.5)), - output_type = OutputType.color, - use_prefiltering = False): - """ - Given a list of shapes, convert them to a linear list of argument, - so that we can use it in TF. 
- """ - with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())): - num_shapes = len(shapes) - num_shape_groups = len(shape_groups) - args = [] - args.append(tf.constant(canvas_width)) - args.append(tf.constant(canvas_height)) - args.append(tf.constant(num_shapes)) - args.append(tf.constant(num_shape_groups)) - args.append(tf.constant(output_type)) - args.append(tf.constant(use_prefiltering)) - for shape in shapes: - if isinstance(shape, pydiffvg.Circle): - args.append(ShapeType.asTensor(diffvg.ShapeType.circle)) - args.append(tf.identity(shape.radius)) - args.append(tf.identity(shape.center)) - elif isinstance(shape, pydiffvg.Ellipse): - args.append(ShapeType.asTensor(diffvg.ShapeType.ellipse)) - args.append(tf.identity(shape.radius)) - args.append(tf.identity(shape.center)) - elif isinstance(shape, pydiffvg.Path): - assert(shape.points.shape[1] == 2) - args.append(ShapeType.asTensor(diffvg.ShapeType.path)) - args.append(tf.identity(shape.num_control_points)) - args.append(tf.identity(shape.points)) - args.append(tf.constant(shape.is_closed)) - args.append(tf.constant(shape.use_distance_approx)) - elif isinstance(shape, pydiffvg.Polygon): - assert(shape.points.shape[1] == 2) - args.append(ShapeType.asTensor(diffvg.ShapeType.path)) - if shape.is_closed: - args.append(tf.zeros(shape.points.shape[0], dtype = tf.int32)) - else: - args.append(tf.zeros(shape.points.shape[0] - 1, dtype = tf.int32)) - args.append(tf.identity(shape.points)) - args.append(tf.constant(shape.is_closed)) - elif isinstance(shape, pydiffvg.Rect): - args.append(ShapeType.asTensor(diffvg.ShapeType.rect)) - args.append(tf.identity(shape.p_min)) - args.append(tf.identity(shape.p_max)) - else: - assert(False) - args.append(tf.identity(shape.stroke_width)) - - for shape_group in shape_groups: - args.append(tf.identity(shape_group.shape_ids)) - # Fill color - if shape_group.fill_color is None: - args.append(__EMPTY_TENSOR) - elif tf.is_tensor(shape_group.fill_color): - args.append(ColorType.asTensor(diffvg.ColorType.constant)) - args.append(tf.identity(shape_group.fill_color)) - elif isinstance(shape_group.fill_color, pydiffvg.LinearGradient): - args.append(ColorType.asTensor(diffvg.ColorType.linear_gradient)) - args.append(tf.identity(shape_group.fill_color.begin)) - args.append(tf.identity(shape_group.fill_color.end)) - args.append(tf.identity(shape_group.fill_color.offsets)) - args.append(tf.identity(shape_group.fill_color.stop_colors)) - elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient): - args.append(ColorType.asTensor(diffvg.ColorType.radial_gradient)) - args.append(tf.identity(shape_group.fill_color.center)) - args.append(tf.identity(shape_group.fill_color.radius)) - args.append(tf.identity(shape_group.fill_color.offsets)) - args.append(tf.identity(shape_group.fill_color.stop_colors)) - - if shape_group.fill_color is not None: - # go through the underlying shapes and check if they are all closed - for shape_id in shape_group.shape_ids: - if isinstance(shapes[shape_id], pydiffvg.Path): - if not shapes[shape_id].is_closed: - warnings.warn("Detected non-closed paths with fill color. 
This might causes unexpected results.", Warning) - - # Stroke color - if shape_group.stroke_color is None: - args.append(__EMPTY_TENSOR) - elif tf.is_tensor(shape_group.stroke_color): - args.append(tf.constant(0)) - args.append(tf.identity(shape_group.stroke_color)) - elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - args.append(ColorType.asTensor(diffvg.ColorType.linear_gradient)) - args.append(tf.identity(shape_group.stroke_color.begin)) - args.append(tf.identity(shape_group.stroke_color.end)) - args.append(tf.identity(shape_group.stroke_color.offsets)) - args.append(tf.identity(shape_group.stroke_color.stop_colors)) - elif isinstance(shape_group.stroke_color, pydiffvg.RadialGradient): - args.append(ColorType.asTensor(diffvg.ColorType.radial_gradient)) - args.append(tf.identity(shape_group.stroke_color.center)) - args.append(tf.identity(shape_group.stroke_color.radius)) - args.append(tf.identity(shape_group.stroke_color.offsets)) - args.append(tf.identity(shape_group.stroke_color.stop_colors)) - args.append(tf.constant(shape_group.use_even_odd_rule)) - # Transformation - args.append(tf.identity(shape_group.shape_to_canvas)) - args.append(FilterType.asTensor(filter.type)) - args.append(tf.constant(filter.radius)) - return args - -class Context: pass - -def forward(width, - height, - num_samples_x, - num_samples_y, - seed, - *args): - """ - Forward rendering pass: given a serialized scene and output an image. - """ - # Unpack arguments - with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())): - current_index = 0 - canvas_width = int(args[current_index]) - current_index += 1 - canvas_height = int(args[current_index]) - current_index += 1 - num_shapes = int(args[current_index]) - current_index += 1 - num_shape_groups = int(args[current_index]) - current_index += 1 - output_type = OutputType(int(args[current_index])) - current_index += 1 - use_prefiltering = bool(args[current_index]) - current_index += 1 - shapes = [] - shape_groups = [] - shape_contents = [] # Important to avoid GC deleting the shapes - color_contents = [] # Same as above - for shape_id in range(num_shapes): - shape_type = ShapeType.asShapeType(args[current_index]) - current_index += 1 - if shape_type == diffvg.ShapeType.circle: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Circle(float(radius), - diffvg.Vector2f(float(center[0]), float(center[1]))) - elif shape_type == diffvg.ShapeType.ellipse: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Ellipse(diffvg.Vector2f(float(radius[0]), float(radius[1])), - diffvg.Vector2f(float(center[0]), float(center[1]))) - elif shape_type == diffvg.ShapeType.path: - num_control_points = args[current_index] - current_index += 1 - points = args[current_index] - current_index += 1 - is_closed = args[current_index] - current_index += 1 - use_distance_approx = args[current_index] - current_index += 1 - shape = diffvg.Path(diffvg.int_ptr(pydiffvg.data_ptr(num_control_points)), - diffvg.float_ptr(pydiffvg.data_ptr(points)), - diffvg.float_ptr(0), # thickness - num_control_points.shape[0], - points.shape[0], - is_closed, - use_distance_approx) - elif shape_type == diffvg.ShapeType.rect: - p_min = args[current_index] - current_index += 1 - p_max = args[current_index] - current_index += 1 - shape = diffvg.Rect(diffvg.Vector2f(float(p_min[0]), float(p_min[1])), - diffvg.Vector2f(float(p_max[0]), float(p_max[1]))) - else: - 
assert(False) - stroke_width = args[current_index] - current_index += 1 - shapes.append(diffvg.Shape(\ - shape_type, shape.get_ptr(), float(stroke_width))) - shape_contents.append(shape) - - for shape_group_id in range(num_shape_groups): - shape_ids = args[current_index] - current_index += 1 - fill_color_type = ColorType.asColorType(args[current_index]) - current_index += 1 - if fill_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - fill_color = diffvg.Constant(\ - diffvg.Vector4f(color[0], color[1], color[2], color[3])) - elif fill_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.LinearGradient(diffvg.Vector2f(float(beg[0]), float(beg[1])), - diffvg.Vector2f(float(end[0]), float(end[1])), - offsets.shape[0], - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - elif fill_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.RadialGradient(diffvg.Vector2f(float(center[0]), float(center[1])), - diffvg.Vector2f(float(radius[0]), float(radius[1])), - offsets.shape[0], - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - elif fill_color_type is None: - fill_color = None - else: - assert(False) - - stroke_color_type = ColorType.asColorType(args[current_index]) - current_index += 1 - if stroke_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - stroke_color = diffvg.Constant(\ - diffvg.Vector4f(float(color[0]), - float(color[1]), - float(color[2]), - float(color[3]))) - elif stroke_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.LinearGradient(\ - diffvg.Vector2f(float(beg[0]), float(beg[1])), - diffvg.Vector2f(float(end[0]), float(end[1])), - offsets.shape[0], - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(stop_colors.data_ptr())) - elif stroke_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.RadialGradient(\ - diffvg.Vector2f(float(center[0]), float(center[1])), - diffvg.Vector2f(float(radius[0]), float(radius[1])), - offsets.shape[0], - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - elif stroke_color_type is None: - stroke_color = None - else: - assert(False) - use_even_odd_rule = bool(args[current_index]) - current_index += 1 - shape_to_canvas = args[current_index] - current_index += 1 - - if fill_color is not None: - 
color_contents.append(fill_color) - if stroke_color is not None: - color_contents.append(stroke_color) - shape_groups.append(diffvg.ShapeGroup(\ - diffvg.int_ptr(pydiffvg.data_ptr(shape_ids)), - shape_ids.shape[0], - diffvg.ColorType.constant if fill_color_type is None else fill_color_type, - diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(), - diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type, - diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(), - use_even_odd_rule, - diffvg.float_ptr(pydiffvg.data_ptr(shape_to_canvas)))) - - filter_type = FilterType.asFilterType(args[current_index]) - current_index += 1 - filter_radius = args[current_index] - current_index += 1 - filt = diffvg.Filter(filter_type, filter_radius) - - device_name = pydiffvg.get_device_name() - device_spec = tf.DeviceSpec.from_string(device_name) - use_gpu = device_spec.device_type == 'GPU' - gpu_index = device_spec.device_index if device_spec.device_index is not None else 0 - - start = time.time() - scene = diffvg.Scene(canvas_width, - canvas_height, - shapes, - shape_groups, - filt, - use_gpu, - gpu_index) - time_elapsed = time.time() - start - global print_timing - if print_timing: - print('Scene construction, time: %.5f s' % time_elapsed) - - with tf.device(device_name): - if output_type == OutputType.color: - rendered_image = tf.zeros((int(height), int(width), 4), dtype = tf.float32) - else: - assert(output_type == OutputType.sdf) - rendered_image = tf.zeros((int(height), int(width), 1), dtype = tf.float32) - - start = time.time() - diffvg.render(scene, - diffvg.float_ptr(0), # background image - diffvg.float_ptr(pydiffvg.data_ptr(rendered_image) if output_type == OutputType.color else 0), - diffvg.float_ptr(pydiffvg.data_ptr(rendered_image) if output_type == OutputType.sdf else 0), - width, - height, - int(num_samples_x), - int(num_samples_y), - seed, - diffvg.float_ptr(0), # d_background_image - diffvg.float_ptr(0), # d_render_image - diffvg.float_ptr(0), # d_render_sdf - diffvg.float_ptr(0), # d_translation - use_prefiltering, - diffvg.float_ptr(0), # eval_positions - 0 ) # num_eval_positions (automatically set to entire raster) - time_elapsed = time.time() - start - if print_timing: - print('Forward pass, time: %.5f s' % time_elapsed) - - ctx = Context() - ctx.scene = scene - ctx.shape_contents = shape_contents - ctx.color_contents = color_contents - ctx.filter = filt - ctx.width = width - ctx.height = height - ctx.num_samples_x = num_samples_x - ctx.num_samples_y = num_samples_y - ctx.seed = seed - ctx.output_type = output_type - ctx.use_prefiltering = use_prefiltering - return rendered_image, ctx - -@tf.custom_gradient -def render(*x): - """ - The main TensorFlow interface of C++ diffvg. 
- """ - assert(tf.executing_eagerly()) - if pydiffvg.get_use_gpu() and os.environ.get('TF_FORCE_GPU_ALLOW_GROWTH') != 'true': - print('******************** WARNING ********************') - print('Tensorflow by default allocates all GPU memory,') - print('causing huge amount of page faults when rendering.') - print('Please set the environment variable TF_FORCE_GPU_ALLOW_GROWTH to true,') - print('so that Tensorflow allocates memory on demand.') - print('*************************************************') - - width = x[0] - height = x[1] - num_samples_x = x[2] - num_samples_y = x[3] - seed = x[4] - args = x[5:] - img, ctx = forward(width, height, num_samples_x, num_samples_y, seed, *args) - - def backward(grad_img): - scene = ctx.scene - width = ctx.width - height = ctx.height - num_samples_x = ctx.num_samples_x - num_samples_y = ctx.num_samples_y - seed = ctx.seed - output_type = ctx.output_type - use_prefiltering = ctx.use_prefiltering - - start = time.time() - with tf.device(pydiffvg.get_device_name()): - diffvg.render(scene, - diffvg.float_ptr(0), # background_image - diffvg.float_ptr(0), # render_image - diffvg.float_ptr(0), # render_sdf - width, - height, - num_samples_x, - num_samples_y, - seed, - diffvg.float_ptr(0), # d_background_image - diffvg.float_ptr(pydiffvg.data_ptr(grad_img) if output_type == OutputType.color else 0), - diffvg.float_ptr(pydiffvg.data_ptr(grad_img) if output_type == OutputType.sdf else 0), - diffvg.float_ptr(0), # d_translation - use_prefiltering, - diffvg.float_ptr(0), # eval_positions - 0 ) # num_eval_positions (automatically set to entire raster)) - time_elapsed = time.time() - start - global print_timing - if print_timing: - print('Backward pass, time: %.5f s' % time_elapsed) - - with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())): - d_args = [] - d_args.append(None) # width - d_args.append(None) # height - d_args.append(None) # num_samples_x - d_args.append(None) # num_samples_y - d_args.append(None) # seed - d_args.append(None) # canvas_width - d_args.append(None) # canvas_height - d_args.append(None) # num_shapes - d_args.append(None) # num_shape_groups - d_args.append(None) # output_type - d_args.append(None) # use_prefiltering - for shape_id in range(scene.num_shapes): - d_args.append(None) # type - d_shape = scene.get_d_shape(shape_id) - if d_shape.type == diffvg.ShapeType.circle: - d_circle = d_shape.as_circle() - radius = tf.constant(d_circle.radius) - d_args.append(radius) - c = d_circle.center - c = tf.constant((c.x, c.y)) - d_args.append(c) - elif d_shape.type == diffvg.ShapeType.ellipse: - d_ellipse = d_shape.as_ellipse() - r = d_ellipse.radius - r = tf.constant((d_ellipse.radius.x, d_ellipse.radius.y)) - d_args.append(r) - c = d_ellipse.center - c = tf.constant((c.x, c.y)) - d_args.append(c) - elif d_shape.type == diffvg.ShapeType.path: - d_path = d_shape.as_path() - points = tf.zeros((d_path.num_points, 2), dtype=tf.float32) - d_path.copy_to(diffvg.float_ptr(pydiffvg.data_ptr(points)),diffvg.float_ptr(0)) - d_args.append(None) # num_control_points - d_args.append(points) - d_args.append(None) # is_closed - d_args.append(None) # use_distance_approx - elif d_shape.type == diffvg.ShapeType.rect: - d_rect = d_shape.as_rect() - p_min = tf.constant((d_rect.p_min.x, d_rect.p_min.y)) - p_max = tf.constant((d_rect.p_max.x, d_rect.p_max.y)) - d_args.append(p_min) - d_args.append(p_max) - else: - assert(False) - w = tf.constant((d_shape.stroke_width)) - d_args.append(w) - - for group_id in range(scene.num_shape_groups): - d_shape_group = 
scene.get_d_shape_group(group_id) - d_args.append(None) # shape_ids - d_args.append(None) # fill_color_type - if d_shape_group.has_fill_color(): - if d_shape_group.fill_color_type == diffvg.ColorType.constant: - d_constant = d_shape_group.fill_color_as_constant() - c = d_constant.color - d_args.append(tf.constant((c.x, c.y, c.z, c.w))) - elif d_shape_group.fill_color_type == diffvg.ColorType.linear_gradient: - d_linear_gradient = d_shape_group.fill_color_as_linear_gradient() - beg = d_linear_gradient.begin - d_args.append(tf.constant((beg.x, beg.y))) - end = d_linear_gradient.end - d_args.append(tf.constant((end.x, end.y))) - offsets = tf.zeros((d_linear_gradient.num_stops), dtype=tf.float32) - stop_colors = tf.zeros((d_linear_gradient.num_stops, 4), dtype=tf.float32) - # HACK: tensorflow's eager mode uses a cache to store scalar - # constants to avoid memory copy. If we pass scalar tensors - # into the C++ code and modify them, we would corrupt the - # cache, causing incorrect result in future scalar constant - # creations. Thus we force tensorflow to copy by plusing a zero. - # (also see https://github.com/tensorflow/tensorflow/issues/11186 - # for more discussion regarding copying tensors) - if offsets.shape.num_elements() == 1: - offsets = offsets + 0 - d_linear_gradient.copy_to(\ - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - d_args.append(offsets) - d_args.append(stop_colors) - elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient: - d_radial_gradient = d_shape_group.fill_color_as_radial_gradient() - center = d_radial_gradient.center - d_args.append(tf.constant((center.x, center.y))) - radius = d_radial_gradient.radius - d_args.append(tf.constant((radius.x, radius.y))) - offsets = tf.zeros((d_radial_gradient.num_stops)) - if offsets.shape.num_elements() == 1: - offsets = offsets + 0 - stop_colors = tf.zeros((d_radial_gradient.num_stops, 4)) - d_radial_gradient.copy_to(\ - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - d_args.append(offsets) - d_args.append(stop_colors) - else: - assert(False) - d_args.append(None) # stroke_color_type - if d_shape_group.has_stroke_color(): - if d_shape_group.stroke_color_type == diffvg.ColorType.constant: - d_constant = d_shape_group.stroke_color_as_constant() - c = d_constant.color - d_args.append(tf.constant((c.x, c.y, c.z, c.w))) - elif d_shape_group.stroke_color_type == diffvg.ColorType.linear_gradient: - d_linear_gradient = d_shape_group.stroke_color_as_linear_gradient() - beg = d_linear_gradient.begin - d_args.append(tf.constant((beg.x, beg.y))) - end = d_linear_gradient.end - d_args.append(tf.constant((end.x, end.y))) - offsets = tf.zeros((d_linear_gradient.num_stops)) - stop_colors = tf.zeros((d_linear_gradient.num_stops, 4)) - if offsets.shape.num_elements() == 1: - offsets = offsets + 0 - d_linear_gradient.copy_to(\ - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - d_args.append(offsets) - d_args.append(stop_colors) - elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient: - d_radial_gradient = d_shape_group.stroke_color_as_radial_gradient() - center = d_radial_gradient.center - d_args.append(tf.constant((center.x, center.y))) - radius = d_radial_gradient.radius - d_args.append(tf.constant((radius.x, radius.y))) - offsets = tf.zeros((d_radial_gradient.num_stops)) - stop_colors = tf.zeros((d_radial_gradient.num_stops, 4)) - if 
offsets.shape.num_elements() == 1: - offsets = offsets + 0 - d_radial_gradient.copy_to(\ - diffvg.float_ptr(pydiffvg.data_ptr(offsets)), - diffvg.float_ptr(pydiffvg.data_ptr(stop_colors))) - d_args.append(offsets) - d_args.append(stop_colors) - else: - assert(False) - d_args.append(None) # use_even_odd_rule - d_shape_to_canvas = tf.zeros((3, 3), dtype = tf.float32) - d_shape_group.copy_to(diffvg.float_ptr(pydiffvg.data_ptr(d_shape_to_canvas))) - d_args.append(d_shape_to_canvas) - d_args.append(None) # filter_type - d_args.append(tf.constant(scene.get_d_filter_radius())) - - return d_args - - return img, backward diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/README.md b/spaces/CVPR/LIVE/thrust/dependencies/cub/README.md deleted file mode 100644 index 18ad2298fd7d10d864d64a022f17ad6743501697..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/dependencies/cub/README.md +++ /dev/null @@ -1,189 +0,0 @@ -
    -

    About CUB

    - -CUB provides state-of-the-art, reusable software components for every layer -of the CUDA programming model: -- [Device-wide primitives](https://nvlabs.github.com/cub/group___device_module.html) - - Sort, prefix scan, reduction, histogram, etc. - - Compatible with CUDA dynamic parallelism -- [Block-wide "collective" primitives](https://nvlabs.github.com/cub/group___block_module.html) - - I/O, sort, prefix scan, reduction, histogram, etc. - - Compatible with arbitrary thread block sizes and types -- [Warp-wide "collective" primitives](https://nvlabs.github.com/cub/group___warp_module.html) - - Warp-wide prefix scan, reduction, etc. - - Safe and architecture-specific -- [Thread and resource utilities](https://nvlabs.github.com/cub/group___thread_module.html) - - PTX intrinsics, device reflection, texture-caching iterators, caching memory allocators, etc. - -![Orientation of collective primitives within the CUDA software stack](http://nvlabs.github.com/cub/cub_overview.png) - -CUB is included in the NVIDIA HPC SDK and the CUDA Toolkit. - -We recommend the [CUB Project Website](http://nvlabs.github.com/cub) for further information and examples. - -

    -

    A Simple Example

-
-```C++
-#include <cub/cub.cuh>
-
-// Block-sorting CUDA kernel
-__global__ void BlockSortKernel(int *d_in, int *d_out)
-{
-     using namespace cub;
-
-     // Specialize BlockRadixSort, BlockLoad, and BlockStore for 128 threads
-     // owning 16 integer items each
-     typedef BlockRadixSort<int, 128, 16>                     BlockRadixSort;
-     typedef BlockLoad<int, 128, 16, BLOCK_LOAD_TRANSPOSE>    BlockLoad;
-     typedef BlockStore<int, 128, 16, BLOCK_STORE_TRANSPOSE>  BlockStore;
-
-     // Allocate shared memory
-     __shared__ union {
-         typename BlockRadixSort::TempStorage  sort;
-         typename BlockLoad::TempStorage       load;
-         typename BlockStore::TempStorage      store;
-     } temp_storage;
-
-     int block_offset = blockIdx.x * (128 * 16);  // OffsetT for this block's segment
-
-     // Obtain a segment of 2048 consecutive keys that are blocked across threads
-     int thread_keys[16];
-     BlockLoad(temp_storage.load).Load(d_in + block_offset, thread_keys);
-     __syncthreads();
-
-     // Collectively sort the keys
-     BlockRadixSort(temp_storage.sort).Sort(thread_keys);
-     __syncthreads();
-
-     // Store the sorted segment
-     BlockStore(temp_storage.store).Store(d_out + block_offset, thread_keys);
-}
-```
-
-Each thread block uses `cub::BlockRadixSort` to collectively sort
-its own input segment. The class is specialized by the
-data type being sorted, by the number of threads per block, by the number of
-keys per thread, and implicitly by the targeted compilation architecture.
-
-The `cub::BlockLoad` and `cub::BlockStore` classes are similarly specialized.
-Furthermore, to provide coalesced accesses to device memory, these primitives are
-configured to access memory using a striped access pattern (where consecutive threads
-simultaneously access consecutive items) and then transpose the keys into
-a [blocked arrangement](index.html#sec4sec3) of elements across threads.
-
-Once specialized, these classes expose opaque `TempStorage` member types.
-The thread block uses these storage types to statically allocate the union of
-shared memory needed by the thread block. (Alternatively these storage types
-could be aliased to global memory allocations).
-
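For context, here is a minimal host-side driver sketch that is not part of the original CUB README: it shows how a kernel like `BlockSortKernel` might be launched. The 128-thread / 16-items-per-thread tile size mirrors the specialization above; the array size, input data, and lack of error checking are illustrative assumptions, and the kernel is assumed to live in the same translation unit.

```C++
#include <cub/cub.cuh>
#include <cuda_runtime.h>
#include <cstdlib>
#include <vector>

// BlockSortKernel as defined in the README example above (same translation unit assumed).
__global__ void BlockSortKernel(int *d_in, int *d_out);

int main()
{
    const int threads_per_block = 128;  // must match the BlockRadixSort specialization
    const int items_per_thread  = 16;   // must match as well
    const int tile_size         = threads_per_block * items_per_thread;  // 2048 keys per block
    const int num_tiles         = 64;   // illustrative: 64 independent segments
    const int num_items         = num_tiles * tile_size;

    // Fill the host input with arbitrary keys.
    std::vector<int> h_in(num_items), h_out(num_items);
    for (int i = 0; i < num_items; ++i) h_in[i] = std::rand();

    // Allocate device buffers and copy the input over.
    int *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in,  num_items * sizeof(int));
    cudaMalloc(&d_out, num_items * sizeof(int));
    cudaMemcpy(d_in, h_in.data(), num_items * sizeof(int), cudaMemcpyHostToDevice);

    // One thread block sorts one 2048-key tile.
    BlockSortKernel<<<num_tiles, threads_per_block>>>(d_in, d_out);
    cudaDeviceSynchronize();

    // Each 2048-key segment of h_out is now sorted independently.
    cudaMemcpy(h_out.data(), d_out, num_items * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note that each 2048-key tile is sorted independently here; sorting the whole array in a single call is what the device-wide `cub::DeviceRadixSort` primitive mentioned earlier is for.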

    -

    Releases

    - -CUB is distributed with the NVIDIA HPC SDK and the CUDA Toolkit in addition -to GitHub. - -See the [changelog](CHANGELOG.md) for details about specific releases. - -| CUB Release | Included In | -| ------------------------- | --------------------------------------- | -| 1.9.10-1 | NVIDIA HPC SDK 20.7 & CUDA Toolkit 11.1 | -| 1.9.10 | NVIDIA HPC SDK 20.5 | -| 1.9.9 | CUDA Toolkit 11.0 | -| 1.9.8-1 | NVIDIA HPC SDK 20.3 | -| 1.9.8 | CUDA Toolkit 11.0 Early Access | -| 1.9.8 | CUDA 11.0 Early Access | -| 1.8.0 | | -| 1.7.5 | Thrust 1.9.2 | -| 1.7.4 | Thrust 1.9.1-2 | -| 1.7.3 | | -| 1.7.2 | | -| 1.7.1 | | -| 1.7.0 | Thrust 1.9.0-5 | -| 1.6.4 | | -| 1.6.3 | | -| 1.6.2 (previously 1.5.5) | | -| 1.6.1 (previously 1.5.4) | | -| 1.6.0 (previously 1.5.3) | | -| 1.5.2 | | -| 1.5.1 | | -| 1.5.0 | | -| 1.4.1 | | -| 1.4.0 | | -| 1.3.2 | | -| 1.3.1 | | -| 1.3.0 | | -| 1.2.3 | | -| 1.2.2 | | -| 1.2.0 | | -| 1.1.1 | | -| 1.0.2 | | -| 1.0.1 | | -| 0.9.4 | | -| 0.9.2 | | -| 0.9.1 | | -| 0.9.0 | | - -

    -

    Development Process

    - -CUB uses the [CMake build system](https://cmake.org/) to build unit tests, -examples, and header tests. To build CUB as a developer, the following -recipe should be followed: - -``` -# Clone CUB repo from github: -git clone https://github.com/thrust/cub.git -cd cub - -# Create build directory: -mkdir build -cd build - -# Configure -- use one of the following: -cmake .. # Command line interface. -ccmake .. # ncurses GUI (Linux only) -cmake-gui # Graphical UI, set source/build directories in the app - -# Build: -cmake --build . -j # invokes make (or ninja, etc) - -# Run tests and examples: -ctest -``` - -By default, the C++14 standard is targeted, but this can be changed in CMake. -More information on configuring your CUB build and creating a pull request is -found in [CONTRIBUTING.md](CONTRIBUTING.md). - -

    -

    Open Source License

    - -CUB is available under the "New BSD" open-source license: - -``` -Copyright (c) 2010-2011, Duane Merrill. All rights reserved. -Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - * Neither the name of the NVIDIA CORPORATION nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -``` diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h deleted file mode 100644 index abb80d8c1048353490ab6c4ddc238af1bea76b9f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -namespace thrust -{ - -namespace detail -{ - -template - struct minimum_category - : minimum_type -{ -}; // end minimum_category - -} // end detail - -} // end thrust - - diff --git a/spaces/CVPR/MonoScene/monoscene/modules.py b/spaces/CVPR/MonoScene/monoscene/modules.py deleted file mode 100644 index 3e8bf875ccd6dffb51bb5acb25f0302fe0032d6c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/modules.py +++ /dev/null @@ -1,194 +0,0 @@ -import torch -import torch.nn as nn -from monoscene.DDR import Bottleneck3D - - -class ASPP(nn.Module): - """ - ASPP 3D - Adapt from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7 - """ - - def __init__(self, planes, dilations_conv_list): - super().__init__() - - # ASPP Block - self.conv_list = dilations_conv_list - self.conv1 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn1 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.conv2 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn2 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.relu = nn.ReLU() - - def forward(self, x_in): - - y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in))))) - for i in range(1, len(self.conv_list)): - y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in))))) - x_in = self.relu(y + x_in) # modified - - return x_in - - -class SegmentationHead(nn.Module): - """ - 3D Segmentation heads to retrieve semantic segmentation at each scale. - Formed by Dim expansion, Conv3D, ASPP block, Conv3D. - Taken from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7 - """ - - def __init__(self, inplanes, planes, nbr_classes, dilations_conv_list): - super().__init__() - - # First convolution - self.conv0 = nn.Conv3d(inplanes, planes, kernel_size=3, padding=1, stride=1) - - # ASPP Block - self.conv_list = dilations_conv_list - self.conv1 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn1 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.conv2 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn2 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.relu = nn.ReLU() - - self.conv_classes = nn.Conv3d( - planes, nbr_classes, kernel_size=3, padding=1, stride=1 - ) - - def forward(self, x_in): - - # Convolution to go from inplanes to planes features... 
- x_in = self.relu(self.conv0(x_in)) - - y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in))))) - for i in range(1, len(self.conv_list)): - y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in))))) - x_in = self.relu(y + x_in) # modified - - x_in = self.conv_classes(x_in) - - return x_in - - -class ProcessKitti(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]): - super(Process, self).__init__() - self.main = nn.Sequential( - *[ - Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - norm_layer=norm_layer, - dilation=[i, i, i], - ) - for i in dilations - ] - ) - - def forward(self, x): - return self.main(x) - - -class Process(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]): - super(Process, self).__init__() - self.main = nn.Sequential( - *[ - Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - norm_layer=norm_layer, - dilation=[i, i, i], - ) - for i in dilations - ] - ) - - def forward(self, x): - return self.main(x) - - -class Upsample(nn.Module): - def __init__(self, in_channels, out_channels, norm_layer, bn_momentum): - super(Upsample, self).__init__() - self.main = nn.Sequential( - nn.ConvTranspose3d( - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - dilation=1, - output_padding=1, - ), - norm_layer(out_channels, momentum=bn_momentum), - nn.ReLU(), - ) - - def forward(self, x): - return self.main(x) - - -class Downsample(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, expansion=8): - super(Downsample, self).__init__() - self.main = Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - expansion=expansion, - stride=2, - downsample=nn.Sequential( - nn.AvgPool3d(kernel_size=2, stride=2), - nn.Conv3d( - feature, - int(feature * expansion / 4), - kernel_size=1, - stride=1, - bias=False, - ), - norm_layer(int(feature * expansion / 4), momentum=bn_momentum), - ), - norm_layer=norm_layer, - ) - - def forward(self, x): - return self.main(x) diff --git a/spaces/CVPR/drawings-to-human/main.py b/spaces/CVPR/drawings-to-human/main.py deleted file mode 100644 index 4383dff4c849fe1564a48f33b271ea8771ff27b7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/drawings-to-human/main.py +++ /dev/null @@ -1,3 +0,0 @@ -import subprocess - -subprocess.run(["make", "build-all"], shell=False) \ No newline at end of file diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py deleted file mode 100644 index 3950e1bc22dfd9024b5371ae9fdb0fe4a45ab0e1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead -from .keypoint_head import ( - ROI_KEYPOINT_HEAD_REGISTRY, - build_keypoint_head, - BaseKeypointRCNNHead, - KRCNNConvDeconvUpsampleHead, -) -from .mask_head import ( - ROI_MASK_HEAD_REGISTRY, - build_mask_head, - BaseMaskRCNNHead, - MaskRCNNConvUpsampleHead, -) -from .roi_heads import ( - ROI_HEADS_REGISTRY, - ROIHeads, - Res5ROIHeads, - StandardROIHeads, - build_roi_heads, - select_foreground_proposals, -) -from .clip_roi_heads import ( - CLIPRes5ROIHeads, - CLIPSwinROIHeads, - PretrainRes5ROIHeads, - CLIPStandardROIHeads, -) -from .cascade_rcnn import CascadeROIHeads -from .rotated_fast_rcnn import RROIHeads -from .fast_rcnn import FastRCNNOutputLayers - -from . import cascade_rcnn # isort:skip - -__all__ = list(globals().keys()) diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md b/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md deleted file mode 100644 index 2715416b83025d8928e6298f238b9db6690028f4..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/Lovelive-VITS-JPZH/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lovelive VITS JPZH -emoji: 📈 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: cc-by-nc-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py b/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py deleted file mode 100644 index b3a69dd0d0581e8c04cf7a17ecb95d3dab135e91..0000000000000000000000000000000000000000 --- a/spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py +++ /dev/null @@ -1,16 +0,0 @@ -from duckduckgo_search import ddg -from duckduckgo_search.utils import SESSION - - -SESSION.proxies = { - "http": f"socks5h://localhost:7890", - "https": f"socks5h://localhost:7890" -} -r = ddg("马保国") -print(r[:2]) -""" -[{'title': '马保国 - 维基百科,自由的百科全书', 'href': 'https://zh.wikipedia.org/wiki/%E9%A9%AC%E4%BF%9D%E5%9B%BD', 'body': '马保国(1951年 — ) ,男,籍贯 山东 临沂,出生及长大于河南,中国大陆太极拳师,自称"浑元形意太极门掌门人" 。 马保国因2017年约战mma格斗家徐晓冬首次出现 -大众视野中。 2020年5月,马保国在对阵民间武术爱好者王庆民的比赛中,30秒内被连续高速击倒三次,此事件成为了持续多日的社交 ...'}, {'title': '馬保國的主页 - 抖音', 'href': 'https://www.douyin.com/user/MS4wLjABAAAAW0E1ziOvxgUh3VVv5FE6xmoo3w5WtZalfphYZKj4mCg', 'body': '6.3万. #马马国教扛打功 最近有几个人模芳我动作,很危险啊,不可以的,朋友们不要受伤了。. 5.3万. #马保国直播带货榜第一 朋友们周末愉快,本周六早上湿点,我本人在此号进行第一次带货直播,活到老,学到老,越活越年轻。. 7.0万. 
#马保国击破红牛罐 昨天 ...'}] - - -""" \ No newline at end of file diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py deleted file mode 100644 index 9158d5f6260ec74bded95377d382387430d7cd70..0000000000000000000000000000000000000000 --- a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_T_224_1k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js deleted file mode 100644 index f0ef2bd228fab79e6a5de476bd0842e999060c06..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js +++ /dev/null @@ -1,122 +0,0 @@ -import plugin from '../../lib/plugins/plugin.js' -import { createRequire } from 'module' - -const require = createRequire(import.meta.url) -const { exec } = require('child_process') - -export class Restart extends plugin { - constructor (e = '') { - super({ - name: '重启', - dsc: '#重启', - event: 'message', - priority: 10, - rule: [{ - reg: '^#重启$', - fnc: 'restart', - permission: 'master' - }, { - reg: '^#(停机|关机)$', - fnc: 'stop', - permission: 'master' - }] - }) - - if (e) this.e = e - - this.key = 'Yz:restart' - } - - async init () { - let restart = await redis.get(this.key) - if (restart) { - restart = JSON.parse(restart) - let time = restart.time || new Date().getTime() - time = (new Date().getTime() - time) / 1000 - - let msg = `重启成功:耗时${time.toFixed(2)}秒` - - if (restart.isGroup) - Bot.sendGroupMsg(restart.bot_id, restart.id, msg) - else - Bot.sendFriendMsg(restart.bot_id, restart.id, msg) - - redis.del(this.key) - } - } - - async restart () { - await this.e.reply('开始执行重启,请稍等...') - logger.mark(`${this.e.logFnc} 开始执行重启,请稍等...`) - - let data = JSON.stringify({ - isGroup: !!this.e.isGroup, - id: this.e.isGroup ? 
this.e.group_id : this.e.user_id, - bot_id: this.e.self_id, - time: new Date().getTime() - }) - - let npm = await this.checkPnpm() - - try { - await redis.set(this.key, data, { EX: 120 }) - let cm = `${npm} start` - if (process.argv[1].includes('pm2')) { - cm = `${npm} run restart` - } - - exec(cm, { windowsHide: true }, (error, stdout, stderr) => { - if (error) { - redis.del(this.key) - this.e.reply(`操作失败!\n${error.stack}`) - logger.error(`重启失败\n${error.stack}`) - } else if (stdout) { - logger.mark('重启成功,运行已由前台转为后台') - logger.mark(`查看日志请用命令:${npm} run log`) - logger.mark(`停止后台运行命令:${npm} stop`) - process.exit() - } - }) - } catch (error) { - redis.del(this.key) - let e = error.stack ?? error - this.e.reply(`操作失败!\n${e}`) - } - - return true - } - - async checkPnpm () { - let npm = 'npm' - let ret = await this.execSync('pnpm -v') - if (ret.stdout) npm = 'pnpm' - return npm - } - - async execSync (cmd) { - return new Promise((resolve, reject) => { - exec(cmd, { windowsHide: true }, (error, stdout, stderr) => { - resolve({ error, stdout, stderr }) - }) - }) - } - - async stop () { - if (!process.argv[1].includes('pm2')) { - logger.mark('关机成功,已停止运行') - await this.e.reply('关机成功,已停止运行') - process.exit() - } - - logger.mark('关机成功,已停止运行') - await this.e.reply('关机成功,已停止运行') - - let npm = await this.checkPnpm() - exec(`${npm} stop`, { windowsHide: true }, (error, stdout, stderr) => { - if (error) { - this.e.reply(`操作失败!\n${error.stack}`) - logger.error(`关机失败\n${error.stack}`) - } - }) - } -} \ No newline at end of file diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py deleted file mode 100644 index 933c575873e7af8e9fca21c857a2c19f99f0cbe1..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def ascension(images, texts: List[str], args): - frame = BuildImage.open(img_dir / "0.png") - text = f"你原本应该要去地狱的,但因为你生前{texts[0]},我们就当作你已经服完刑期了" - try: - frame.draw_text( - (40, 30, 482, 135), - text, - allow_wrap=True, - max_fontsize=50, - min_fontsize=20, - ) - except ValueError: - raise TextOverLength(texts[0]) - return frame.save_jpg() - - -add_meme( - "ascension", - ascension, - min_texts=1, - max_texts=1, - default_texts=["学的是机械"], - keywords=["升天"], -) diff --git a/spaces/Cong723/gpt-academic-public/docs/README_RS.md b/spaces/Cong723/gpt-academic-public/docs/README_RS.md deleted file mode 100644 index f8d925a27a6e5a19304db6f6d266e3bb3163172f..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/docs/README_RS.md +++ /dev/null @@ -1,291 +0,0 @@ -> **Note** -> -> Этот файл самовыражения автоматически генерируется модулем перевода markdown в этом проекте и может быть не на 100% правильным. -> - -# ChatGPT Academic Optimization - -**Если вам понравился этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные академические ярлыки или функциональные плагины, не стесняйтесь создавать запросы на изменение или пул-запросы. Мы также имеем [README на английском языке](docs/README_EN.md), переведенный этим же проектом. - -> **Примечание** -> -> 1. 
Пожалуйста, обратите внимание, что только функциonal plugins (buttons) с **красным цветом** могут читать файлы, некоторые из которых находятся в **выпадающем меню** плагинов. Кроме того, мы приветствуем и обрабатываем любые новые плагины с **наивысшим приоритетом**! -> -> 2. Функции каждого файла в этом проекте подробно описаны в собственном анализе [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) . При повторных итерациях вы также можете вызывать обновленный отчет функций проекта, щелкнув соответствующий функциональный плагин GPT. Часто задаваемые вопросы собраны в [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) . - -
    - -Функция | Описание ---- | --- -Редактирование одним кликом | Поддержка редактирования одним кликом, поиск грамматических ошибок в академических статьях -Переключение языков "Английский-Китайский" одним кликом | Одним кликом переключайте языки "Английский-Китайский" -Разъяснение программного кода одним кликом | Вы можете правильно отобразить и объяснить программный код. -[Настраиваемые сочетания клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настраиваемых сочетаний клавиш -[Настройка сервера-прокси](https://www.bilibili.com/video/BV1rc411W7Dr) | Поддержка настройки сервера-прокси -Модульный дизайн | Поддержка настраиваемых функциональных плагинов высших порядков и функциональных плагинов, поддерживающих [горячее обновление](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[Автоанализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Прочтение в один клик](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) кода программы проекта -[Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Один клик для проанализирования дерева других проектов Python/C/C++/Java/Lua/... -Чтение статей| [Функциональный плагин] Одним кликом прочитайте весь латех (LaTex) текст статьи и сгенерируйте краткое описание -Перевод и редактирование всех статей из LaTex | [Функциональный плагин] Перевод или редактирование LaTex-статьи всего одним нажатием кнопки -Генерация комментариев в пакетном режиме | [Функциональный плагин] Одним кликом сгенерируйте комментарии к функциям в пакетном режиме -Генерация отчетов пакета CHAT | [Функциональный плагин] Автоматически создавайте сводные отчеты после выполнения -[Помощник по arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи arxiv, чтобы легко перевести резюме и загрузить PDF-файл -[Перевод полного текста статьи в формате PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлеките заголовок статьи, резюме и переведите весь текст статьи (многопоточно) -[Помощник интеграции Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] Дайте GPT выбрать для вас интересные статьи на любой странице поиска Google Scholar. -Отображение формул/изображений/таблиц | Одновременно отображается tex-форма и рендер-форма формул, поддержка формул, высокоскоростных кодов -Поддержка функциональных плагинов многопоточности | Поддержка многопоточной работы с плагинами, обрабатывайте огромные объемы текста или программы одним кликом -Запуск темной темы gradio[подробнее](https://github.com/binary-husky/chatgpt_academic/issues/173) | Добавьте / ?__dark-theme=true в конец URL браузера, чтобы переключиться на темную тему. -[Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), поддержка API2D | Находиться между GPT3.5, GPT4 и [清华ChatGLM](https://github.com/THUDM/ChatGLM-6B) должно быть очень приятно, не так ли? -Альтернатива huggingface без использования научной сети [Онлайн-эксперимент](https://huggingface.co/spaces/qingxu98/gpt-academic) | Войдите в систему, скопируйте пространство [этот пространственный URL](https://huggingface.co/spaces/qingxu98/gpt-academic) -…… | …… - - -
    - -- Новый интерфейс (вы можете изменить настройку LAYOUT в config.py, чтобы переключаться между "горизонтальным расположением" и "вертикальным расположением") -
    - -
    - - -Вы профессиональный переводчик научных статей. - -- Все кнопки генерируются динамически путем чтения functional.py и могут быть легко настроены под пользовательские потребности, освобождая буфер обмена. -
    - -
    - -- Редактирование/корректирование -
    - -
    - -- Если вывод содержит формулы, они отображаются одновременно как в формате tex, так и в рендеринговом формате для удобства копирования и чтения. -
    - -
    - -- Лень смотреть код проекта? Просто покажите chatgpt. -
    - -
    - -- Несколько моделей больших языковых моделей смешиваются (ChatGLM + OpenAI-GPT3.5 + [API2D] (https://api2d.com/) -GPT4) -
    - -
    - -Несколько моделей больших языковых моделей смешиваются в [бета-версии huggingface] (https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (huggingface-версия не поддерживает chatglm). - - ---- - -## Установка - Метод 1: Запуск (Windows, Linux или MacOS) - -1. Скачайте проект -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Настройка API_KEY и настройки прокси - -В файле `config.py` настройте зарубежный прокси и OpenAI API KEY, пояснения ниже -``` -1. Если вы находитесь в Китае, вам нужно настроить зарубежный прокси, чтобы использовать OpenAI API. Пожалуйста, внимательно прочитайте config.py для получения инструкций (1. Измените USE_PROXY на True; 2. Измените прокси в соответствии с инструкциями). -2. Настройка API KEY OpenAI. Вам необходимо зарегистрироваться на сайте OpenAI и получить API KEY. После получения API KEY настройте его в файле config.py. -3. Вопросы, связанные с сетевыми проблемами (тайм-аут сети, прокси не работает), можно найти здесь: https://github.com/binary-husky/chatgpt_academic/issues/1 -``` -(Примечание: при запуске программы будет проверяться наличие конфиденциального файла конфигурации с именем `config_private.py` и использоваться в нем конфигурация параметров, которая перезаписывает параметры с такими же именами в `config.py`. Поэтому, если вы понимаете логику чтения нашей конфигурации, мы настоятельно рекомендуем вам создать новый файл конфигурации с именем `config_private.py` рядом с `config.py` и переместить (скопировать) настройки из `config.py` в `config_private.py`. `config_private.py` не подвергается контролю git, что делает конфиденциальную информацию более безопасной.) - - -3. Установить зависимости -```sh -# (Выбор 1) Рекомендуется -python -m pip install -r requirements.txt - -# (Выбор 2) Если вы используете anaconda, то шаги будут аналогичны: -# (Шаг 2.1) conda create -n gptac_venv python=3.11 -# (Шаг 2.2) conda activate gptac_venv -# (Шаг 2.3) python -m pip install -r requirements.txt - -# Примечание: используйте официальный источник pip или источник pip.aliyun.com. Другие источники pip могут вызывать проблемы. временный метод замены источника: -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -Если требуется поддержка TUNA ChatGLM, необходимо установить дополнительные зависимости (если вы неудобны с python, необходимо иметь хорошую конфигурацию компьютера): -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. Запустите -```sh -python main.py -``` - -5. Тестовые функции плагина -``` -- Тестирвоание анализа проекта Python - В основной области введите `./crazy_functions/test_project/python/dqn` , а затем нажмите "Анализировать весь проект Python" -- Тестирование самостоятельного чтения кода - Щелкните " [Демонстрационный режим многопоточности] Проанализируйте сам проект (расшифровка источника кода)" -- Тестирование функций шаблонного плагина (вы можете использовать эту функцию как шаблон для более сложных функций, требующих ответа от gpt в связи с тем, что произошло сегодня в истории) - Щелкните " [Функции шаблонного плагина] День в истории" -- На нижней панели дополнительные функции для выбора -``` - -## Установка - Метод 2: Использование docker (Linux) - - -1. 
Только ChatGPT (рекомендуется для большинства пользователей): -``` sh -# Скачать проект -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# Настроить прокси за границей и OpenAI API KEY -Отредактируйте файл config.py в любом текстовом редакторе. -# Установка -docker build -t gpt-academic . -# Запустить -docker run --rm -it --net=host gpt-academic - -# Проверка функциональности плагина -## Проверка шаблонной функции плагина (требуется, чтобы gpt ответил, что произошло "в истории на этот день"), вы можете использовать эту функцию в качестве шаблона для реализации более сложных функций. -Нажмите "[Шаблонный демонстрационный плагин] История на этот день". -## Тест абстрактного резюме для проекта на Latex -В области ввода введите ./crazy_functions/test_project/latex/attention, а затем нажмите "Чтение реферата о тезисах статьи на LaTeX". -## Тестовый анализ проекта на Python -Введите в область ввода ./crazy_functions/test_project/python/dqn, затем нажмите "Проанализировать весь проект на Python". - -Выбирайте больше функциональных плагинов в нижнем выпадающем меню. -``` - -2. ChatGPT + ChatGLM (требуется глубокое знание Docker и достаточно мощное компьютерное оборудование): - -``` sh -# Изменение Dockerfile -cd docs && nano Dockerfile+ChatGLM -# Как построить | Как запустить (Dockerfile+ChatGLM в пути docs, сначала перейдите в папку с помощью cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# Как запустить | Как запустить (2) я хочу войти в контейнер и сделать какие-то настройки до запуска: -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - - -## Установка-Метод 3: Другие способы развертывания - -1. Развертывание на удаленном облачном сервере -Пожалуйста, посетите [Deploy Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. Использование WSL2 (Windows Subsystem for Linux) -Пожалуйста, посетите [Deploy Wiki-2] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## Установка-Настройки прокси -### Метод 1: Обычный способ -[Конфигурация прокси] (https://github.com/binary-husky/chatgpt_academic/issues/1) - -### Метод 2: Руководство новичка -[Руководство новичка] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - - ---- - -## Настройка новой удобной кнопки (настройка быстрой клавиши для научной работы) -Откройте `core_functional.py` любым текстовым редактором, добавьте элементы, как показано ниже, затем перезапустите программу. (Если кнопка уже успешно добавлена и видна, то префикс и суффикс поддерживают горячее изменение, чтобы они оказались в действии, не нужно перезапускать программу.) -например -``` -"Супер анг-рус": { - # Префикс, будет добавлен перед вашим вводом. Например, используется для описания ваших потребностей, таких как перевод, кодинг, редактирование и т. д. - "Prefix": "Пожалуйста, переведите этот фрагмент на русский язык, а затем создайте пошаговую таблицу в markdown, чтобы объяснить все специализированные термины, которые встречаются в тексте:\n\n", - - # Суффикс, будет добавлен после вашего ввода. Например, совместно с префиксом можно обрамить ваш ввод в кавычки. 
- "Suffix": "", -}, -``` -
    - -
    - ---- - - -## Демонстрация некоторых возможностей - -### Отображение изображений: - -
    - -
    - - -### Если программа может понимать и разбирать сама себя: - -
    - -
    - -
    - -
    - - -### Анализ других проектов на Python/Cpp: -
    - -
    - -
    - -
    - -### Генерация понимания и абстрактов с помощью Латех статей в один клик -
    - -
    - -### Автоматическое создание отчетов -
    - - - -
    - -### Модульный дизайн функций -
    - - -
    - - -### Трансляция исходного кода на английский язык - -
    - -
    - -## Todo и планирование версий: -- version 3.2+ (todo): функция плагины поддерживают более многочисленные интерфейсы параметров -- version 3.1: поддержка одновременного опроса нескольких моделей gpt! Поддержка api2d, поддержка балансировки нагрузки множества apikey. -- version 3.0: поддержка chatglm и других маленьких llm -- version 2.6: реструктурировал структуру плагинов, повысил интерактивность, добавил больше плагинов -- version 2.5: само обновление, решение проблемы слишком длинного текста и переполнения токена при переводе всего проекта исходного кода -- version 2.4: (1) добавлена функция перевода всего PDF-документа; (2) добавлена функция изменения положения входной области; (3) добавлена опция вертикального макета; (4) оптимизация функций многопоточности плагина. -- version 2.3: улучшение многопоточной интерактивности -- version 2.2: функция плагинов поддерживает горячую перезагрузку -- version 2.1: блочная раскладка -- version 2.0: модульный дизайн функций плагина -- version 1.0: основные функции - -## Ссылки на изучение и обучение - -``` -В коде использовано много хороших дизайнерских решений из других отличных проектов, в том числе: - -# Project1: использование многих приемов из ChuanhuChatGPT -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Project2: ChatGLM-6B в Тхуде: -https://github.com/THUDM/ChatGLM-6B -``` - diff --git a/spaces/Cropinky/esrgan/realesrgan/models/__init__.py b/spaces/Cropinky/esrgan/realesrgan/models/__init__.py deleted file mode 100644 index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/esrgan/realesrgan/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import model modules for registry -# scan all the files that end with '_model.py' under the model folder -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames] diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/proc.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/proc.py deleted file mode 100644 index a6621c00b1cc3f4efa60b1dbaac72d8717f565b3..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/proc.py +++ /dev/null @@ -1,51 +0,0 @@ -import multiprocessing - -def cpu_count(): - return multiprocessing.cpu_count() - -def get_pool(processes): - pool = multiprocessing.Pool(processes = processes) - return pool - -def wait_for_pool(pool): - pool.close() - pool.join() - -def set_proc_name(name): - import setproctitle - setproctitle.setproctitle(name) - -def kill(pid): - import util - if type(pid) == list: - for p in pid: - kill(p) - elif type(pid) == int: - cmd = 'kill -9 %d'%(pid) - print cmd - print util.cmd.cmd(cmd) - elif type(pid) == str: - pids = get_pid(pid) - kill(pids) - else: - raise ValueError, 'Not supported parameter type:', type(pid) - -def ps_aux_grep(pattern): - import util - cmd = 'ps aux|grep %s'%(pattern) - return util.cmd.cmd(cmd) - - -def get_pid(pattern): - import util - cmd = 'ps aux|grep %s'%(pattern) - results = util.cmd.cmd(cmd) - results = util.str.split(results, '\n') - pids = [] - for result in 
results: - info = result.split() - if len(info) > 0: - pid = int(info[1]) - pids.append(pid) - return pids - diff --git a/spaces/Cyril666/my_abi/utils.py b/spaces/Cyril666/my_abi/utils.py deleted file mode 100644 index 1b7b5db1bc1dd191191c31b3e72228ccd1c4f7a1..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/utils.py +++ /dev/null @@ -1,304 +0,0 @@ -import logging -import os -import time - -import cv2 -import numpy as np -import torch -import yaml -from matplotlib import colors -from matplotlib import pyplot as plt -from torch import Tensor, nn -from torch.utils.data import ConcatDataset - -class CharsetMapper(object): - """A simple class to map ids into strings. - - It works only when the character set is 1:1 mapping between individual - characters and individual ids. - """ - - def __init__(self, - filename='', - max_length=30, - null_char=u'\u2591'): - """Creates a lookup table. - - Args: - filename: Path to charset file which maps characters to ids. - max_sequence_length: The max length of ids and string. - null_char: A unicode character used to replace '' character. - the default value is a light shade block '░'. - """ - self.null_char = null_char - self.max_length = max_length - - self.label_to_char = self._read_charset(filename) - self.char_to_label = dict(map(reversed, self.label_to_char.items())) - self.num_classes = len(self.label_to_char) - - def _read_charset(self, filename): - """Reads a charset definition from a tab separated text file. - - Args: - filename: a path to the charset file. - - Returns: - a dictionary with keys equal to character codes and values - unicode - characters. - """ - import re - pattern = re.compile(r'(\d+)\t(.+)') - charset = {} - self.null_label = 0 - charset[self.null_label] = self.null_char - with open(filename, 'r') as f: - for i, line in enumerate(f): - m = pattern.match(line) - assert m, f'Incorrect charset file. line #{i}: {line}' - label = int(m.group(1)) + 1 - char = m.group(2) - charset[label] = char - return charset - - def trim(self, text): - assert isinstance(text, str) - return text.replace(self.null_char, '') - - def get_text(self, labels, length=None, padding=True, trim=False): - """ Returns a string corresponding to a sequence of character ids. - """ - length = length if length else self.max_length - labels = [l.item() if isinstance(l, Tensor) else int(l) for l in labels] - if padding: - labels = labels + [self.null_label] * (length-len(labels)) - text = ''.join([self.label_to_char[label] for label in labels]) - if trim: text = self.trim(text) - return text - - def get_labels(self, text, length=None, padding=True, case_sensitive=False): - """ Returns the labels of the corresponding text. 
- """ - length = length if length else self.max_length - if padding: - text = text + self.null_char * (length - len(text)) - if not case_sensitive: - text = text.lower() - labels = [self.char_to_label[char] for char in text] - return labels - - def pad_labels(self, labels, length=None): - length = length if length else self.max_length - - return labels + [self.null_label] * (length - len(labels)) - - @property - def digits(self): - return '0123456789' - - @property - def digit_labels(self): - return self.get_labels(self.digits, padding=False) - - @property - def alphabets(self): - all_chars = list(self.char_to_label.keys()) - valid_chars = [] - for c in all_chars: - if c in 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ': - valid_chars.append(c) - return ''.join(valid_chars) - - @property - def alphabet_labels(self): - return self.get_labels(self.alphabets, padding=False) - - -class Timer(object): - """A simple timer.""" - def __init__(self): - self.data_time = 0. - self.data_diff = 0. - self.data_total_time = 0. - self.data_call = 0 - self.running_time = 0. - self.running_diff = 0. - self.running_total_time = 0. - self.running_call = 0 - - def tic(self): - self.start_time = time.time() - self.running_time = self.start_time - - def toc_data(self): - self.data_time = time.time() - self.data_diff = self.data_time - self.running_time - self.data_total_time += self.data_diff - self.data_call += 1 - - def toc_running(self): - self.running_time = time.time() - self.running_diff = self.running_time - self.data_time - self.running_total_time += self.running_diff - self.running_call += 1 - - def total_time(self): - return self.data_total_time + self.running_total_time - - def average_time(self): - return self.average_data_time() + self.average_running_time() - - def average_data_time(self): - return self.data_total_time / (self.data_call or 1) - - def average_running_time(self): - return self.running_total_time / (self.running_call or 1) - - -class Logger(object): - _handle = None - _root = None - - @staticmethod - def init(output_dir, name, phase): - format = '[%(asctime)s %(filename)s:%(lineno)d %(levelname)s {}] ' \ - '%(message)s'.format(name) - logging.basicConfig(level=logging.INFO, format=format) - - try: os.makedirs(output_dir) - except: pass - config_path = os.path.join(output_dir, f'{phase}.txt') - Logger._handle = logging.FileHandler(config_path) - Logger._root = logging.getLogger() - - @staticmethod - def enable_file(): - if Logger._handle is None or Logger._root is None: - raise Exception('Invoke Logger.init() first!') - Logger._root.addHandler(Logger._handle) - - @staticmethod - def disable_file(): - if Logger._handle is None or Logger._root is None: - raise Exception('Invoke Logger.init() first!') - Logger._root.removeHandler(Logger._handle) - - -class Config(object): - - def __init__(self, config_path, host=True): - def __dict2attr(d, prefix=''): - for k, v in d.items(): - if isinstance(v, dict): - __dict2attr(v, f'{prefix}{k}_') - else: - if k == 'phase': - assert v in ['train', 'test'] - if k == 'stage': - assert v in ['pretrain-vision', 'pretrain-language', - 'train-semi-super', 'train-super'] - self.__setattr__(f'{prefix}{k}', v) - - assert os.path.exists(config_path), '%s does not exists!' 
% config_path - with open(config_path) as file: - config_dict = yaml.load(file, Loader=yaml.FullLoader) - with open('configs/template.yaml') as file: - default_config_dict = yaml.load(file, Loader=yaml.FullLoader) - __dict2attr(default_config_dict) - __dict2attr(config_dict) - self.global_workdir = os.path.join(self.global_workdir, self.global_name) - - def __getattr__(self, item): - attr = self.__dict__.get(item) - if attr is None: - attr = dict() - prefix = f'{item}_' - for k, v in self.__dict__.items(): - if k.startswith(prefix): - n = k.replace(prefix, '') - attr[n] = v - return attr if len(attr) > 0 else None - else: - return attr - - def __repr__(self): - str = 'ModelConfig(\n' - for i, (k, v) in enumerate(sorted(vars(self).items())): - str += f'\t({i}): {k} = {v}\n' - str += ')' - return str - -def blend_mask(image, mask, alpha=0.5, cmap='jet', color='b', color_alpha=1.0): - # normalize mask - mask = (mask-mask.min()) / (mask.max() - mask.min() + np.finfo(float).eps) - if mask.shape != image.shape: - mask = cv2.resize(mask,(image.shape[1], image.shape[0])) - # get color map - color_map = plt.get_cmap(cmap) - mask = color_map(mask)[:,:,:3] - # convert float to uint8 - mask = (mask * 255).astype(dtype=np.uint8) - - # set the basic color - basic_color = np.array(colors.to_rgb(color)) * 255 - basic_color = np.tile(basic_color, [image.shape[0], image.shape[1], 1]) - basic_color = basic_color.astype(dtype=np.uint8) - # blend with basic color - blended_img = cv2.addWeighted(image, color_alpha, basic_color, 1-color_alpha, 0) - # blend with mask - blended_img = cv2.addWeighted(blended_img, alpha, mask, 1-alpha, 0) - - return blended_img - -def onehot(label, depth, device=None): - """ - Args: - label: shape (n1, n2, ..., ) - depth: a scalar - - Returns: - onehot: (n1, n2, ..., depth) - """ - if not isinstance(label, torch.Tensor): - label = torch.tensor(label, device=device) - onehot = torch.zeros(label.size() + torch.Size([depth]), device=device) - onehot = onehot.scatter_(-1, label.unsqueeze(-1), 1) - - return onehot - -class MyDataParallel(nn.DataParallel): - - def gather(self, outputs, target_device): - r""" - Gathers tensors from different GPUs on a specified device - (-1 means the CPU). - """ - def gather_map(outputs): - out = outputs[0] - if isinstance(out, (str, int, float)): - return out - if isinstance(out, list) and isinstance(out[0], str): - return [o for out in outputs for o in out] - if isinstance(out, torch.Tensor): - return torch.nn.parallel._functions.Gather.apply(target_device, self.dim, *outputs) - if out is None: - return None - if isinstance(out, dict): - if not all((len(out) == len(d) for d in outputs)): - raise ValueError('All dicts must have the same number of keys') - return type(out)(((k, gather_map([d[k] for d in outputs])) - for k in out)) - return type(out)(map(gather_map, zip(*outputs))) - - # Recursive function calls like this create reference cycles. - # Setting the function to None clears the refcycle. 
- try: - res = gather_map(outputs) - finally: - gather_map = None - return res - - -class MyConcatDataset(ConcatDataset): - def __getattr__(self, k): - return getattr(self.datasets[0], k) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css deleted file mode 100644 index 78067c2729600b4ee3e7e9c6442a129e8ffe9894..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-bokeh.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;justify-content:center}.layout.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full);color:var(--body-text-color)}.altair.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.caption.svelte-1fe5ixn.svelte-1fe5ixn{font-size:var(--text-sm)}.matplotlib.svelte-1fe5ixn img.svelte-1fe5ixn{object-fit:contain} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3c29bea1.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3c29bea1.js deleted file mode 100644 index 02ba642af97a89cb79638761a5eae4078411fde9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3c29bea1.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as L,e as M,s as j,N as w,K as o,U as c,p as g,n as H,A as d,B,k as h,o as v,z as b,v as k,x as T,E as S,ae as q,O as z,q as C,r as E,F as A}from"./index-1d65707a.js";import{B as D}from"./Button-f155035a.js";function F(t){let e,i;return{c(){e=w("div"),o(e,"class",i="prose "+t[1].join(" ")+" svelte-1ybaih5"),o(e,"id",t[0]),c(e,"min",t[4]),c(e,"hide",!t[3])},m(s,n){g(s,e,n),e.innerHTML=t[2]},p(s,[n]){n&4&&(e.innerHTML=s[2]),n&2&&i!==(i="prose "+s[1].join(" ")+" svelte-1ybaih5")&&o(e,"class",i),n&1&&o(e,"id",s[0]),n&18&&c(e,"min",s[4]),n&10&&c(e,"hide",!s[3])},i:H,o:H,d(s){s&&d(e)}}}function K(t,e,i){let{elem_id:s=""}=e,{elem_classes:n=[]}=e,{value:m}=e,{visible:u=!0}=e,{min_height:f=!1}=e;const l=B();return t.$$set=a=>{"elem_id"in a&&i(0,s=a.elem_id),"elem_classes"in a&&i(1,n=a.elem_classes),"value"in a&&i(2,m=a.value),"visible"in a&&i(3,u=a.visible),"min_height"in a&&i(4,f=a.min_height)},t.$$.update=()=>{t.$$.dirty&4&&l("change")},[s,n,m,u,f]}class N extends L{constructor(e){super(),M(this,e,K,F,j,{elem_id:0,elem_classes:1,value:2,visible:3,min_height:4})}}function O(t){let e,i,s,n,m;const u=[t[4],{variant:"center"}];let f={};for(let l=0;l{"label"in _&&i(5,s=_.label),"elem_id"in _&&i(0,n=_.elem_id),"elem_classes"in _&&i(1,m=_.elem_classes),"visible"in _&&i(2,u=_.visible),"value"in _&&i(3,f=_.value),"loading_status"in _&&i(4,l=_.loading_status)},t.$$.update=()=>{t.$$.dirty&32&&a("change")},[n,m,u,f,l,s,r]}class I extends L{constructor(e){super(),M(this,e,G,U,j,{label:5,elem_id:0,elem_classes:1,visible:2,value:3,loading_status:4})}}const Q=I,R=["static"],V=t=>({type:{payload:"string"},description:{payload:"HTML output"}});export{Q as Component,V as document,R as modes}; -//# sourceMappingURL=index-3c29bea1.js.map diff --git 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-81c41db1.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-81c41db1.js deleted file mode 100644 index 7a9c6a3bffaf32ddc87bae0e4cea6c1004a4cc98..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-81c41db1.js +++ /dev/null @@ -1,2 +0,0 @@ -import{T as l}from"./Textbox-086bc878.js";import"./index-3370be2a.js";/* empty css */import"./Button-89624748.js";import"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";import"./Copy-6cd42558.js";const a=["static","dynamic"],n=t=>({type:{payload:"string"},description:{payload:"text string"},example_data:t.value||"hello world"});export{l as Component,n as document,a as modes}; -//# sourceMappingURL=index-81c41db1.js.map diff --git a/spaces/DataForGood/bechdelai-demo/bechdelaidemo/utils.py b/spaces/DataForGood/bechdelai-demo/bechdelaidemo/utils.py deleted file mode 100644 index 0b0f0d7c6ae66b892e35d4417baeeb97b578a65e..0000000000000000000000000000000000000000 --- a/spaces/DataForGood/bechdelai-demo/bechdelaidemo/utils.py +++ /dev/null @@ -1,86 +0,0 @@ -from pytube import YouTube -import moviepy.editor as mp - - -def download_youtube_video(link: str, filename: str, caption_language: str = "en") -> None: - """Download a youtube video with captions given an id - - Parameters - ---------- - link : str - Youtube video link - filename : str - File name to save the video and the caption - caption_language : str - Language caption to download - - Raises - ------ - TypeError - url must be a string - ValueError - url must start with 'http' - """ - try: - yt = YouTube(link) - except: - print("Connection Error") - return - - filename = filename if filename.endswith(".mp4") else filename + ".mp4" - - try: - ( - yt.streams.filter(progressive=True, file_extension="mp4") - .order_by("resolution") - .desc() - .first() - ).download(filename=filename) - - except Exception as e: - print("Could not download the video!", e) - - try: - captions = { - k: v - for k, v in yt.captions.lang_code_index.items() - if caption_language in k - } - for lang, caption in captions.items(): - caption.download(title=f"caption_{lang}", srt=False) - except Exception as e: - print("Could not download the caption!", e) - print("Task Completed!") - - -def download_youtube_audio(link:str,filename:str = "audio.mp3") -> str: - yt = YouTube(link) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename=filename) - return filename - - -def import_as_clip(path_to_video: str) -> mp.VideoFileClip: - """Imports a video file as a VideoFileClip object. - - Parameters: - path_to_video (str): Path to a video file. - - Returns: - mp.VideoFileClip: VideoFileClip object. - """ - return mp.VideoFileClip(path_to_video) - -def extract_audio_from_movie(file: str, extension: str = '.wav') -> None: - """Extract the audio from a film and save it to a file. - - The audio is saved in the same directory as the film. - - Parameters: - file (str): The name of the film file to extract the audio from. - extension (str): The file extension of the audio file to save (default is ".wav"). 
- """ - clip = import_as_clip(file) - filename = file.split(sep='.')[0] + extension - clip.audio.write_audiofile(filename) - return filename diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/loss.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/loss.py deleted file mode 100644 index 684c35ba2e5f0e44e5f6f92ff0c42cfc46ad2dbf..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/loss.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F - -from paddleseg.cvlibs import manager - - -@manager.LOSSES.add_component -class MRSD(nn.Layer): - def __init__(self, eps=1e-6): - super().__init__() - self.eps = eps - - def forward(self, logit, label, mask=None): - """ - Forward computation. - - Args: - logit (Tensor): Logit tensor, the data type is float32, float64. - label (Tensor): Label tensor, the data type is float32, float64. The shape should equal to logit. - mask (Tensor, optional): The mask where the loss valid. Default: None. - """ - if len(label.shape) == 3: - label = label.unsqueeze(1) - sd = paddle.square(logit - label) - loss = paddle.sqrt(sd + self.eps) - if mask is not None: - mask = mask.astype('float32') - if len(mask.shape) == 3: - mask = mask.unsqueeze(1) - loss = loss * mask - loss = loss.sum() / (mask.sum() + self.eps) - mask.stop_gradient = True - else: - loss = loss.mean() - - return loss diff --git a/spaces/Dimentian/LLMs-Stable-Vicuna-13B/app.py b/spaces/Dimentian/LLMs-Stable-Vicuna-13B/app.py deleted file mode 100644 index 23648ebb2af7e1db988c1553eed34eecaac64c62..0000000000000000000000000000000000000000 --- a/spaces/Dimentian/LLMs-Stable-Vicuna-13B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/LLMs/Stable-Vicuna-13B").launch() \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan3.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan3.py deleted file mode 100644 index d4b5b6ae121c6e8b89283f0763108b5471ea4af1..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan3.py +++ /dev/null @@ -1,634 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Generator architecture from the paper -"Alias-Free Generative Adversarial Networks".""" - -import numpy as np -import scipy.signal -import scipy.optimize -import torch -import torch.nn.functional as F -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import filtered_lrelu -from torch_utils.ops import bias_act - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def modulated_conv2d( - # Input tensor: [batch_size, in_channels, in_height, in_width] - x, - # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width] - w, - s, # Style tensor: [batch_size, in_channels] - demodulate=True, # Apply weight demodulation? - padding=0, # Padding: int or [padH, padW] - input_gain=None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels] -): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(x.shape[0]) - out_channels, in_channels, kh, kw = w.shape - misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(s, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs. - if demodulate: - w = w * w.square().mean([1, 2, 3], keepdim=True).rsqrt() - s = s * s.square().mean().rsqrt() - - # Modulate weights. - w = w.unsqueeze(0) # [NOIkk] - w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Demodulate weights. - if demodulate: - dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO] - w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Apply input scaling. - if input_gain is not None: - input_gain = input_gain.expand(batch_size, in_channels) # [NI] - w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Execute as one fused op using grouped convolution. - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_gradfix.conv2d(input=x, weight=w.to( - x.dtype), padding=padding, groups=batch_size) - x = x.reshape(batch_size, -1, *x.shape[2:]) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - bias=True, # Apply additive bias before the activation function? - lr_multiplier=1, # Learning rate multiplier. - # Initial standard deviation of the weight tensor. - weight_init=1, - bias_init=0, # Initial value of the additive bias. 
- ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn( - [out_features, in_features]) * (weight_init / lr_multiplier)) - bias_init = np.broadcast_to(np.asarray( - bias_init, dtype=np.float32), [out_features]) - self.bias = torch.nn.Parameter(torch.from_numpy( - bias_init / lr_multiplier)) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - # Conditioning label (C) dimensionality, 0 = no labels. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - # Number of intermediate latents to output. - num_ws, - num_layers=2, # Number of mapping layers. - # Learning rate multiplier for the mapping layers. - lr_multiplier=0.01, - # Decay for tracking the moving average of W during training. - w_avg_beta=0.998, - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - # Construct layers. - self.embed = FullyConnectedLayer( - self.c_dim, self.w_dim) if self.c_dim > 0 else None - features = [self.z_dim + (self.w_dim if self.c_dim > - 0 else 0)] + [self.w_dim] * self.num_layers - for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]): - layer = FullyConnectedLayer( - in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False): - misc.assert_shape(z, [None, self.z_dim]) - if truncation_cutoff is None: - truncation_cutoff = self.num_ws - - # Embed, normalize, and concatenate inputs. - x = z.to(torch.float32) - x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt() - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = self.embed(c.to(torch.float32)) - y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt() - x = torch.cat([x, y], dim=1) if x is not None else y - - # Execute layers. - for idx in range(self.num_layers): - x = getattr(self, f'fc{idx}')(x) - - # Update moving average of W. - if update_emas: - self.w_avg.copy_(x.detach().mean( - dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast and apply truncation. 
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - if truncation_psi != 1: - x[:, :truncation_cutoff] = self.w_avg.lerp( - x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisInput(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - channels, # Number of output channels. - size, # Output spatial size: int or [width, height]. - sampling_rate, # Output sampling rate. - bandwidth, # Output bandwidth. - ): - super().__init__() - self.w_dim = w_dim - self.channels = channels - self.size = np.broadcast_to(np.asarray(size), [2]) - self.sampling_rate = sampling_rate - self.bandwidth = bandwidth - - # Draw random frequencies from uniform 2D disc. - freqs = torch.randn([self.channels, 2]) - radii = freqs.square().sum(dim=1, keepdim=True).sqrt() - freqs /= radii * radii.square().exp().pow(0.25) - freqs *= bandwidth - phases = torch.rand([self.channels]) - 0.5 - - # Setup parameters and buffers. - self.weight = torch.nn.Parameter( - torch.randn([self.channels, self.channels])) - self.affine = FullyConnectedLayer( - w_dim, 4, weight_init=0, bias_init=[1, 0, 0, 0]) - # User-specified inverse transform wrt. resulting image. - self.register_buffer('transform', torch.eye(3, 3)) - self.register_buffer('freqs', freqs) - self.register_buffer('phases', phases) - - def forward(self, w): - # Introduce batch dimension. - transforms = self.transform.unsqueeze(0) # [batch, row, col] - freqs = self.freqs.unsqueeze(0) # [batch, channel, xy] - phases = self.phases.unsqueeze(0) # [batch, channel] - - # Apply learned transformation. - t = self.affine(w) # t = (r_c, r_s, t_x, t_y) - # t' = (r'_c, r'_s, t'_x, t'_y) - t = t / t[:, :2].norm(dim=1, keepdim=True) - # Inverse rotation wrt. resulting image. - m_r = torch.eye(3, device=w.device).unsqueeze( - 0).repeat([w.shape[0], 1, 1]) - m_r[:, 0, 0] = t[:, 0] # r'_c - m_r[:, 0, 1] = -t[:, 1] # r'_s - m_r[:, 1, 0] = t[:, 1] # r'_s - m_r[:, 1, 1] = t[:, 0] # r'_c - # Inverse translation wrt. resulting image. - m_t = torch.eye(3, device=w.device).unsqueeze( - 0).repeat([w.shape[0], 1, 1]) - m_t[:, 0, 2] = -t[:, 2] # t'_x - m_t[:, 1, 2] = -t[:, 3] # t'_y - # First rotate resulting image, then translate, and finally apply user-specified transform. - transforms = m_r @ m_t @ transforms - - # Transform frequencies. - phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2) - freqs = freqs @ transforms[:, :2, :2] - - # Dampen out-of-band frequencies that may occur due to the user-specified transform. - amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) / - (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1) - - # Construct sampling grid. - theta = torch.eye(2, 3, device=w.device) - theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate - theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate - grids = torch.nn.functional.affine_grid(theta.unsqueeze( - 0), [1, 1, self.size[1], self.size[0]], align_corners=False) - - # Compute Fourier features. - x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2) - ).squeeze(3) # [batch, height, width, channel] - x = x + phases.unsqueeze(1).unsqueeze(2) - x = torch.sin(x * (np.pi * 2)) - x = x * amplitudes.unsqueeze(1).unsqueeze(2) - - # Apply trainable mapping. 
- weight = self.weight / np.sqrt(self.channels) - x = x @ weight.t() - - # Ensure correct shape. - x = x.permute(0, 3, 1, 2) # [batch, channel, height, width] - misc.assert_shape(x, [w.shape[0], self.channels, - int(self.size[1]), int(self.size[0])]) - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},', - f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - # Intermediate latent (W) dimensionality. - w_dim, - is_torgb, # Is this the final ToRGB layer? - is_critically_sampled, # Does this layer use critical sampling? - use_fp16, # Does this layer use FP16? - - # Input & output specifications. - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Input spatial size: int or [width, height]. - in_size, - # Output spatial size: int or [width, height]. - out_size, - in_sampling_rate, # Input sampling rate (s). - out_sampling_rate, # Output sampling rate (s). - # Input cutoff frequency (f_c). - in_cutoff, - # Output cutoff frequency (f_c). - out_cutoff, - # Input transition band half-width (f_h). - in_half_width, - # Output Transition band half-width (f_h). - out_half_width, - - # Hyperparameters. - # Convolution kernel size. Ignored for final the ToRGB layer. - conv_kernel=3, - # Low-pass filter size relative to the lower resolution when up/downsampling. - filter_size=6, - # Relative sampling rate for leaky ReLU. Ignored for final the ToRGB layer. - lrelu_upsampling=2, - # Use radially symmetric downsampling filter? Ignored for critically sampled layers. - use_radial_filters=False, - # Clamp the output to [-X, +X], None = disable clamping. - conv_clamp=256, - # Decay rate for the moving average of input magnitudes. - magnitude_ema_beta=0.999, - ): - super().__init__() - self.w_dim = w_dim - self.is_torgb = is_torgb - self.is_critically_sampled = is_critically_sampled - self.use_fp16 = use_fp16 - self.in_channels = in_channels - self.out_channels = out_channels - self.in_size = np.broadcast_to(np.asarray(in_size), [2]) - self.out_size = np.broadcast_to(np.asarray(out_size), [2]) - self.in_sampling_rate = in_sampling_rate - self.out_sampling_rate = out_sampling_rate - self.tmp_sampling_rate = max( - in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling) - self.in_cutoff = in_cutoff - self.out_cutoff = out_cutoff - self.in_half_width = in_half_width - self.out_half_width = out_half_width - self.conv_kernel = 1 if is_torgb else conv_kernel - self.conv_clamp = conv_clamp - self.magnitude_ema_beta = magnitude_ema_beta - - # Setup parameters and buffers. - self.affine = FullyConnectedLayer( - self.w_dim, self.in_channels, bias_init=1) - self.weight = torch.nn.Parameter(torch.randn( - [self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel])) - self.bias = torch.nn.Parameter(torch.zeros([self.out_channels])) - self.register_buffer('magnitude_ema', torch.ones([])) - - # Design upsampling filter. 
- self.up_factor = int( - np.rint(self.tmp_sampling_rate / self.in_sampling_rate)) - assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate - self.up_taps = filter_size * \ - self.up_factor if self.up_factor > 1 and not self.is_torgb else 1 - self.register_buffer('up_filter', self.design_lowpass_filter( - numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate)) - - # Design downsampling filter. - self.down_factor = int( - np.rint(self.tmp_sampling_rate / self.out_sampling_rate)) - assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate - self.down_taps = filter_size * \ - self.down_factor if self.down_factor > 1 and not self.is_torgb else 1 - self.down_radial = use_radial_filters and not self.is_critically_sampled - self.register_buffer('down_filter', self.design_lowpass_filter( - numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial)) - - # Compute padding. - # Desired output size before downsampling. - pad_total = (self.out_size - 1) * self.down_factor + 1 - # Input size after upsampling. - pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor - # Size reduction caused by the filters. - pad_total += self.up_taps + self.down_taps - 2 - # Shift sample locations according to the symmetric interpretation (Appendix C.3). - pad_lo = (pad_total + self.up_factor) // 2 - pad_hi = pad_total - pad_lo - self.padding = [int(pad_lo[0]), int(pad_hi[0]), - int(pad_lo[1]), int(pad_hi[1])] - - def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False): - assert noise_mode in ['random', 'const', 'none'] # unused - misc.assert_shape(x, [None, self.in_channels, int( - self.in_size[1]), int(self.in_size[0])]) - misc.assert_shape(w, [x.shape[0], self.w_dim]) - - # Track input magnitude. - if update_emas: - with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.magnitude_ema.copy_(magnitude_cur.lerp( - self.magnitude_ema, self.magnitude_ema_beta)) - input_gain = self.magnitude_ema.rsqrt() - - # Execute affine layer. - styles = self.affine(w) - if self.is_torgb: - weight_gain = 1 / \ - np.sqrt(self.in_channels * (self.conv_kernel ** 2)) - styles = styles * weight_gain - - # Execute modulated conv2d. - dtype = torch.float16 if ( - self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32 - x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles, - padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain) - - # Execute bias, filtered leaky ReLU, and clamping. - gain = 1 if self.is_torgb else np.sqrt(2) - slope = 1 if self.is_torgb else 0.2 - x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype), - up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp) - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.out_channels, int( - self.out_size[1]), int(self.out_size[0])]) - assert x.dtype == dtype - return x - - @staticmethod - def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False): - assert numtaps >= 1 - - # Identity filter. - if numtaps == 1: - return None - - # Separable Kaiser low-pass filter. 
- if not radial: - f = scipy.signal.firwin( - numtaps=numtaps, cutoff=cutoff, width=width, fs=fs) - return torch.as_tensor(f, dtype=torch.float32) - - # Radially symmetric jinc-based filter. - x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs - r = np.hypot(*np.meshgrid(x, x)) - f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r) - beta = scipy.signal.kaiser_beta( - scipy.signal.kaiser_atten(numtaps, width / (fs / 2))) - w = np.kaiser(numtaps, beta) - f *= np.outer(w, w) - f /= np.sum(f) - return torch.as_tensor(f, dtype=torch.float32) - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},', - f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},', - f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},', - f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},', - f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},', - f'in_size={list(self.in_size)}, out_size={list(self.out_size)},', - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Total number of layers, excluding Fourier features and ToRGB. - num_layers=14, - # Number of critically sampled layers at the end. - num_critical=2, - # Cutoff frequency of the first layer (f_{c,0}). - first_cutoff=2, - # Minimum stopband of the first layer (f_{t,0}). - first_stopband=2**2.1, - # Minimum stopband of the last layer, expressed relative to the cutoff. - last_stopband_rel=2**0.3, - # Number of additional pixels outside the image. - margin_size=10, - output_scale=0.25, # Scale factor for the output image. - # Use FP16 for the N highest resolutions. - num_fp16_res=4, - # Arguments for SynthesisLayer. - **layer_kwargs, - ): - super().__init__() - self.w_dim = w_dim - self.num_ws = num_layers + 2 - self.img_resolution = img_resolution - self.img_channels = img_channels - self.num_layers = num_layers - self.num_critical = num_critical - self.margin_size = margin_size - self.output_scale = output_scale - self.num_fp16_res = num_fp16_res - - # Geometric progression of layer cutoffs and min. stopbands. - last_cutoff = self.img_resolution / 2 # f_{c,N} - last_stopband = last_cutoff * last_stopband_rel # f_{t,N} - exponents = np.minimum( - np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1) - cutoffs = first_cutoff * \ - (last_cutoff / first_cutoff) ** exponents # f_c[i] - stopbands = first_stopband * \ - (last_stopband / first_stopband) ** exponents # f_t[i] - - # Compute remaining layer parameters. - sampling_rates = np.exp2( - np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i] - half_widths = np.maximum( - stopbands, sampling_rates / 2) - cutoffs # f_h[i] - sizes = sampling_rates + self.margin_size * 2 - sizes[-2:] = self.img_resolution - channels = np.rint(np.minimum( - (channel_base / 2) / cutoffs, channel_max)) - channels[-1] = self.img_channels - - # Construct layers. 
- self.input = SynthesisInput( - w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]), - sampling_rate=sampling_rates[0], bandwidth=cutoffs[0]) - self.layer_names = [] - for idx in range(self.num_layers + 1): - prev = max(idx - 1, 0) - is_torgb = (idx == self.num_layers) - is_critically_sampled = ( - idx >= self.num_layers - self.num_critical) - use_fp16 = (sampling_rates[idx] * (2 ** - self.num_fp16_res) > self.img_resolution) - layer = SynthesisLayer( - w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16, - in_channels=int(channels[prev]), out_channels=int(channels[idx]), - in_size=int(sizes[prev]), out_size=int(sizes[idx]), - in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]), - in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx], - in_half_width=half_widths[prev], out_half_width=half_widths[idx], - **layer_kwargs) - name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}' - setattr(self, name, layer) - self.layer_names.append(name) - - def forward(self, ws, **layer_kwargs): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32).unbind(dim=1) - - # Execute layers. - x = self.input(ws[0]) - for name, w in zip(self.layer_names, ws[1:]): - x = getattr(self, name)(x, w, **layer_kwargs) - if self.output_scale != 1: - x = x * self.output_scale - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.img_channels, - self.img_resolution, self.img_resolution]) - x = x.to(torch.float32) - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},', - f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - # Conditioning label (C) dimensionality. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output resolution. - img_channels, # Number of output color channels. - mapping_kwargs={}, # Arguments for MappingNetwork. - resize=None, - **synthesis_kwargs, # Arguments for SynthesisNetwork. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.synthesis = SynthesisNetwork( - w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork( - z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - self.resize = resize - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, **synthesis_kwargs): - if input_is_w: - ws = z - if ws.dim() == 2: - ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1]) - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, - truncation_cutoff=truncation_cutoff, update_emas=update_emas) - img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs) - if self.resize is not None: - img = imresize(img, [self.resize, self.resize]) - return img - -# ---------------------------------------------------------------------------- - - -def imresize(image, size): - dim = image.dim() - if dim == 3: - image = image.unsqueeze(1) - b, _, h, w = image.shape - if size[0] > h: - image = F.interpolate(image, size, mode='bilinear') - elif size[0] < h: - image = F.interpolate(image, size, mode='area') - if dim == 3: - image = image.squeeze(1) - return image diff --git a/spaces/Egrt/LicenseGAN/README.md b/spaces/Egrt/LicenseGAN/README.md deleted file mode 100644 index da545571fcbeef92ed72116c9af67cdae94d5ab4..0000000000000000000000000000000000000000 --- a/spaces/Egrt/LicenseGAN/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: LicenseGAN -emoji: 📉 -colorFrom: yellow -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
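Taken together, the fields documented above form the YAML front matter at the top of a Space README (the `---` block shown at the start of this file). A minimal illustrative header — the values below are placeholders, not taken from any repository in this diff — might look like:

```yaml
---
title: Example Space       # Display title for the Space
emoji: 🚀                  # Space emoji (emoji-only)
colorFrom: blue            # Thumbnail gradient start color
colorTo: gray              # Thumbnail gradient end color
sdk: streamlit             # gradio, streamlit, or static
sdk_version: 1.10.0        # placeholder; per the reference above, only used with the streamlit SDK
app_file: app.py           # Main application file, relative to the repo root
pinned: false              # Whether the Space stays on top of your list
---
```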
diff --git a/spaces/Enderfga/mtCNN_sysu/test.py b/spaces/Enderfga/mtCNN_sysu/test.py deleted file mode 100644 index 022f537faf93c105dd15b91a9a5dd7deda07eeb3..0000000000000000000000000000000000000000 --- a/spaces/Enderfga/mtCNN_sysu/test.py +++ /dev/null @@ -1,84 +0,0 @@ -import cv2 -from utils.detect import create_mtcnn_net, MtcnnDetector -from utils.vision import vis_face -import argparse - - -MIN_FACE_SIZE = 3 - -def parse_args(): - parser = argparse.ArgumentParser(description='Test MTCNN', - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - parser.add_argument('--net', default='onet', help='which net to show', type=str) - parser.add_argument('--pnet_path', default="./model_store/pnet_epoch_20.pt",help='path to pnet model', type=str) - parser.add_argument('--rnet_path', default="./model_store/rnet_epoch_20.pt",help='path to rnet model', type=str) - parser.add_argument('--onet_path', default="./model_store/onet_epoch_20.pt",help='path to onet model', type=str) - parser.add_argument('--path', default="./img/mid.png",help='path to image', type=str) - parser.add_argument('--min_face_size', default=MIN_FACE_SIZE,help='min face size', type=int) - parser.add_argument('--use_cuda', default=False,help='use cuda', type=bool) - parser.add_argument('--thresh', default='[0.1, 0.1, 0.1]',help='thresh', type=str) - parser.add_argument('--save_name', default="result.jpg",help='save name', type=str) - parser.add_argument('--input_mode', default=1,help='image or video', type=int) - args = parser.parse_args() - return args -if __name__ == '__main__': - args = parse_args() - thresh = [float(i) for i in (args.thresh).split('[')[1].split(']')[0].split(',')] - pnet, rnet, onet = create_mtcnn_net(p_model_path=args.pnet_path, r_model_path=args.rnet_path,o_model_path=args.onet_path, use_cuda=args.use_cuda) - mtcnn_detector = MtcnnDetector(pnet=pnet, rnet=rnet, onet=onet, min_face_size=args.min_face_size,threshold=thresh) - if args.input_mode == 1: - img = cv2.imread(args.path) - img_bg = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - p_bboxs, r_bboxs, bboxs, landmarks = mtcnn_detector.detect_face(img) - # print box_align - save_name = args.save_name - if args.net == 'pnet': - vis_face(img_bg, p_bboxs, landmarks, MIN_FACE_SIZE, save_name) - elif args.net == 'rnet': - vis_face(img_bg, r_bboxs, landmarks, MIN_FACE_SIZE, save_name) - elif args.net == 'onet': - vis_face(img_bg, bboxs, landmarks, MIN_FACE_SIZE, save_name) - elif args.input_mode == 0: - cap=cv2.VideoCapture(0) - fourcc = cv2.VideoWriter_fourcc(*'XVID') - out = cv2.VideoWriter('out.mp4' ,fourcc,10,(640,480)) - while True: - t1=cv2.getTickCount() - ret,frame = cap.read() - if ret == True: - boxes_c,landmarks = mtcnn_detector.detect_face(frame) - t2=cv2.getTickCount() - t=(t2-t1)/cv2.getTickFrequency() - fps=1.0/t - for i in range(boxes_c.shape[0]): - bbox = boxes_c[i, :4] - score = boxes_c[i, 4] - corpbbox = [int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3])] - - #画人脸框 - cv2.rectangle(frame, (corpbbox[0], corpbbox[1]), - (corpbbox[2], corpbbox[3]), (255, 0, 0), 1) - #画置信度 - cv2.putText(frame, '{:.2f}'.format(score), - (corpbbox[0], corpbbox[1] - 2), - cv2.FONT_HERSHEY_SIMPLEX, - 0.5,(0, 0, 255), 2) - #画fps值 - cv2.putText(frame, '{:.4f}'.format(t) + " " + '{:.3f}'.format(fps), (10, 20), - cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 255), 2) - #画关键点 - for i in range(landmarks.shape[0]): - for j in range(len(landmarks[i])//2): - cv2.circle(frame, (int(landmarks[i][2*j]),int(int(landmarks[i][2*j+1]))), 2, (0,0,255)) - a = out.write(frame) - 
cv2.imshow("result", frame) - if cv2.waitKey(1) & 0xFF == ord('q'): - break - else: - break - cap.release() - out.release() - cv2.destroyAllWindows() - - diff --git a/spaces/Ernar246/OpenAI-Reverse-Proxy/server.js b/spaces/Ernar246/OpenAI-Reverse-Proxy/server.js deleted file mode 100644 index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000 --- a/spaces/Ernar246/OpenAI-Reverse-Proxy/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/EstebanDC/Compression_Index/README.md b/spaces/EstebanDC/Compression_Index/README.md deleted file mode 100644 index 19a90b961d2a9ecd2c8adc1d2113c7f6bd5647cf..0000000000000000000000000000000000000000 --- a/spaces/EstebanDC/Compression_Index/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Compression Index -emoji: 🔥 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EuroPython2022/mmocr-demo/README.md b/spaces/EuroPython2022/mmocr-demo/README.md deleted file mode 100644 index e727b2f6ce4c8430bb7844df00598902dd762876..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mmocr Demo -emoji: 📊 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.0.25 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TranslationTable.html b/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TranslationTable.html deleted file mode 100644 index 1342b93f7186913b2d1874ea79fae1b40aea6ecf..0000000000000000000000000000000000000000 --- a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TranslationTable.html +++ /dev/null @@ -1,96 +0,0 @@ - - - - Translation Data - - - - - - - - - - - - - {% for translation in translations %} - - - - - {% endfor %} - -
- <th>User Story</th> - <th>Translation</th>
- <td>{{ translation[0] }}</td> - <td>{{ translation[1] }}</td>
    - - - - \ No newline at end of file diff --git a/spaces/FabioZe/WizardLM-WizardCoder-15B-V1.0/app.py b/spaces/FabioZe/WizardLM-WizardCoder-15B-V1.0/app.py deleted file mode 100644 index b950a0dc3c9037b8db001411736515bf668d4f57..0000000000000000000000000000000000000000 --- a/spaces/FabioZe/WizardLM-WizardCoder-15B-V1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/WizardLM/WizardCoder-15B-V1.0").launch() \ No newline at end of file diff --git a/spaces/FoxMeo/fire-detector/utils/wandb_logging/__init__.py b/spaces/FoxMeo/fire-detector/utils/wandb_logging/__init__.py deleted file mode 100644 index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/utils/wandb_logging/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# init \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/mdxnet.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/mdxnet.py deleted file mode 100644 index 86a066893ad99cfed77788027a9deb8ed486a7f2..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/mdxnet.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch -from tqdm import tqdm - -cpu = torch.device("cpu") - - -class ConvTDFNetTrim: - def __init__( - self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024 - ): - super(ConvTDFNetTrim, self).__init__() - - self.dim_f = dim_f - self.dim_t = 2**dim_t - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to( - device - ) - self.target_name = target_name - self.blender = "blender" in model_name - - self.dim_c = 4 - out_c = self.dim_c * 4 if target_name == "*" else self.dim_c - self.freq_pad = torch.zeros( - [1, out_c, self.n_bins - self.dim_f, self.dim_t] - ).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop, - window=self.window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape( - [-1, self.dim_c, self.n_bins, self.dim_t] - ) - return x[:, :, : self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = ( - self.freq_pad.repeat([x.shape[0], 1, 1, 1]) - if freq_pad is None - else freq_pad - ) - x = torch.cat([x, freq_pad], -2) - c = 4 * 2 if self.target_name == "*" else 2 - x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape( - [-1, 2, self.n_bins, self.dim_t] - ) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft( - x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True - ) - return x.reshape([-1, c, self.chunk_size]) - - -def get_models(device, dim_f, dim_t, n_fft): - return ConvTDFNetTrim( - device=device, - model_name="Conv-TDF", - target_name="vocals", - L=11, - dim_f=dim_f, - dim_t=dim_t, - n_fft=n_fft, - ) - - -class Predictor: - def __init__(self, args): - import onnxruntime as ort - - logger.info(ort.get_available_providers()) - self.args = args - self.model_ = get_models( - device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft - ) - self.model = ort.InferenceSession( - os.path.join(args.onnx, 
self.model_.target_name + ".onnx"), - providers=[ - "CUDAExecutionProvider", - "DmlExecutionProvider", - "CPUExecutionProvider", - ], - ) - logger.info("ONNX load done") - - def demix(self, mix): - samples = mix.shape[-1] - margin = self.args.margin - chunk_size = self.args.chunks * 44100 - assert not margin == 0, "margin cannot be zero!" - if margin > chunk_size: - margin = chunk_size - - segmented_mix = {} - - if self.args.chunks == 0 or samples < chunk_size: - chunk_size = samples - - counter = -1 - for skip in range(0, samples, chunk_size): - counter += 1 - - s_margin = 0 if counter == 0 else margin - end = min(skip + chunk_size + margin, samples) - - start = skip - s_margin - - segmented_mix[skip] = mix[:, start:end].copy() - if end == samples: - break - - sources = self.demix_base(segmented_mix, margin_size=margin) - """ - mix:(2,big_sample) - segmented_mix:offset->(2,small_sample) - sources:(1,2,big_sample) - """ - return sources - - def demix_base(self, mixes, margin_size): - chunked_sources = [] - progress_bar = tqdm(total=len(mixes)) - progress_bar.set_description("Processing") - for mix in mixes: - cmix = mixes[mix] - sources = [] - n_sample = cmix.shape[1] - model = self.model_ - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1 - ) - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i : i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu) - with torch.no_grad(): - _ort = self.model - spek = model.stft(mix_waves) - if self.args.denoise: - spec_pred = ( - -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5 - + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5 - ) - tar_waves = model.istft(torch.tensor(spec_pred)) - else: - tar_waves = model.istft( - torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0]) - ) - tar_signal = ( - tar_waves[:, :, trim:-trim] - .transpose(0, 1) - .reshape(2, -1) - .numpy()[:, :-pad] - ) - - start = 0 if mix == 0 else margin_size - end = None if mix == list(mixes.keys())[::-1][0] else -margin_size - if margin_size == 0: - end = None - sources.append(tar_signal[:, start:end]) - - progress_bar.update(1) - - chunked_sources.append(sources) - _sources = np.concatenate(chunked_sources, axis=-1) - # del self.model - progress_bar.close() - return _sources - - def prediction(self, m, vocal_root, others_root, format): - os.makedirs(vocal_root, exist_ok=True) - os.makedirs(others_root, exist_ok=True) - basename = os.path.basename(m) - mix, rate = librosa.load(m, mono=False, sr=44100) - if mix.ndim == 1: - mix = np.asfortranarray([mix, mix]) - mix = mix.T - sources = self.demix(mix.T) - opt = sources[0].T - if format in ["wav", "flac"]: - sf.write( - "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate - ) - sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate) - else: - path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename) - path_other = "%s/%s_others.wav" % (others_root, basename) - sf.write(path_vocal, mix - opt, rate) - sf.write(path_other, opt, rate) - if os.path.exists(path_vocal): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_vocal, path_vocal[:-4] + ".%s" % format) - ) - if os.path.exists(path_other): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_other, path_other[:-4] + ".%s" % format) - ) - - -class 
MDXNetDereverb: - def __init__(self, chunks, device): - self.onnx = "assets/uvr5_weights/onnx_dereverb_By_FoxJoy" - self.shifts = 10 # 'Predict with randomised equivariant stabilisation' - self.mixing = "min_mag" # ['default','min_mag','max_mag'] - self.chunks = chunks - self.margin = 44100 - self.dim_t = 9 - self.dim_f = 3072 - self.n_fft = 6144 - self.denoise = True - self.pred = Predictor(self) - self.device = device - - def path_audio(self, input, vocal_root, others_root, format): - self.pred.prediction(input, vocal_root, others_root, format) diff --git a/spaces/GAIR/Factool/factool/tasks.py b/spaces/GAIR/Factool/factool/tasks.py deleted file mode 100644 index 71082c936729a53e180f75e4e1581ce23391978c..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/factool/tasks.py +++ /dev/null @@ -1,21 +0,0 @@ -"""Definition of different types of tasks.""" -from __future__ import annotations - -from enum import Enum - - -class TaskType(str, Enum): - """Task types available in this tool.""" - - kbqa = "kbqa" - math = "math" - code = "code" - sci = "sci" - - @staticmethod - def list() -> list[str]: - """Obtains string representations of all values. - Returns: - List of all values in str. - """ - return list(map(lambda c: c.value, TaskType)) \ No newline at end of file diff --git a/spaces/GeorgeOrville/bingo/src/lib/storage.ts b/spaces/GeorgeOrville/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/Gopal101/Netflix-Data-Analytics/app.py b/spaces/Gopal101/Netflix-Data-Analytics/app.py deleted file mode 100644 index f262ff907869e4598f27d64b71aecb378bd1f4b7..0000000000000000000000000000000000000000 --- a/spaces/Gopal101/Netflix-Data-Analytics/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import numpy as np -import pandas as pd -import plotly.express as px -import matplotlib.pyplot as plt -import plotly.graph_objects as go -import streamlit as st -import missingno as msno -import seaborn as sns - -st.set_page_config(layout='wide') - -def clean_duration_count(value): - if isinstance(value,float): - return value - value_parts = value.split() - duration_time = value_parts[0].replace(',','') - return duration_time - -def clean_dataset(df): - df['duration'] = df['duration'].apply(clean_duration_count).astype(float) - return df - -def load_dataset(): - df = pd.read_csv('data/netflix.csv') - df = clean_dataset(df) - return df - -st.title("Netflix data shown in visuals format") -with st.spinner("loading dataset"): - print("D") - df=load_dataset() - - -movie = df[df['type'] == 'Movie'] -tv_show = df[df['type'] == 'TV Show'] - -view_options = ["Dont Show",'Show All','Only Movies','Only Shows'] -view_choice = st.sidebar.radio("Select View", view_options) -if view_choice == view_options[0]: - st.info("View Dataset on the sidebar") -elif 
view_choice == view_options[1]: - st.write(df) -elif view_choice == view_options[2]: - st.write(movie) -elif view_choice == view_options[3]: - st.write(tv_show) - -st.header("Yearly release") -all_year_count=df["release_year"].value_counts().reset_index() -fig = px.bar(all_year_count, 'index', 'release_year', title="No of releases respect to years") -movie_year_count=movie["release_year"].value_counts().reset_index() -st.write(movie_year_count) -fig2 = px.bar(movie_year_count, 'index', 'release_year', title="No of Movies releasae respect to years", log_y=True) -show_year_count=tv_show["release_year"].value_counts().reset_index() -fig3 = px.bar(show_year_count, 'index', 'release_year', title="No of shows releasae respect to years",log_y=True) -st.plotly_chart(fig, use_container_width=True) -st.plotly_chart(fig2, use_container_width=True) -st.plotly_chart(fig3, use_container_width=True) - -fig=plt.figure(figsize=(2,4)) -sns.countplot(data=df ,x='type') - -st.plotly_chart(fig) - -director_name=df['director'].value_counts().reset_index() -fig=px.area(director_name,'index','director',title="NO of production count respect to director",color='director') -st.plotly_chart(fig, use_container_width=True) - -df_countries = df['country'].value_counts().reset_index() -fig=px.funnel_area(df_countries.head(n=10),'index','country',title="Top 10 countries have production with count") -st.plotly_chart(fig, use_container_width=True) - -movie.sort_values('duration',inplace=True, ascending=False) -fig = px.bar(movie, 'title', 'duration', title="Duration of movies", color='release_year') -st.plotly_chart(fig, use_container_width=True) - -cast_count=df['cast'].value_counts().head(10) -fig=px.bar(cast_count,title='Top 10 most castings') -st.plotly_chart(fig, use_container_width=True) - -release=df['release_year'].value_counts().head(10) -fig=px.bar(release,title='Top 10 years have more production') -st.plotly_chart(fig, use_container_width=True) - -tv_show.value_counts() -fig=px.line(tv_show.head(10),'title','duration',title='Top 10 TV shows have maximum length') -st.plotly_chart(fig, use_container_width=True,) - -movie.value_counts() -fig=px.line(movie.head(10),'title','duration',title='10 Movies have maximum length') -st.plotly_chart(fig, use_container_width=True) - -india_df = df[df['country'] =='India'].copy() -fig=px.scatter(india_df.head(n=50),'title', 'director', title='Top 50 Movies released in India per year 1900-2022',color_discrete_sequence=["red", "green", "blue", "goldenrod", "magenta"],) -st.plotly_chart(fig, use_container_width=True) - -rating_counts=df["rating"].value_counts().reset_index() -fig=px.bar(rating_counts, 'index', 'rating', title="Ratings counts in netflix",) -st.plotly_chart(fig, use_container_width=True) - -listed_count=df['listed_in'].value_counts() -fig=px.funnel(listed_count.head(n=10),title='Top 10 category listed' ) -st.plotly_chart(fig, use_container_width=True) - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/albu_example/mask_rcnn_r50_fpn_albu_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/albu_example/mask_rcnn_r50_fpn_albu_1x_coco.py deleted file mode 100644 index b3f879a6c573871ea17b2bf158173aadf14457b6..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/albu_example/mask_rcnn_r50_fpn_albu_1x_coco.py +++ /dev/null @@ -1,73 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) 
-albu_train_transforms = [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict( - type='OneOf', - transforms=[ - dict( - type='RGBShift', - r_shift_limit=10, - g_shift_limit=10, - b_shift_limit=10, - p=1.0), - dict( - type='HueSaturationValue', - hue_shift_limit=20, - sat_shift_limit=30, - val_shift_limit=20, - p=1.0) - ], - p=0.1), - dict(type='JpegCompression', quality_lower=85, quality_upper=95, p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), -] -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='Pad', size_divisor=32), - dict( - type='Albu', - transforms=albu_train_transforms, - bbox_params=dict( - type='BboxParams', - format='pascal_voc', - label_fields=['gt_labels'], - min_visibility=0.0, - filter_lost_elements=True), - keymap={ - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - }, - update_pad_shape=False, - skip_img_without_anno=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'], - meta_keys=('filename', 'ori_shape', 'img_shape', 'img_norm_cfg', - 'pad_shape', 'scale_factor')) -] -data = dict(train=dict(pipeline=train_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_3x_coco.py deleted file mode 100644 index f9177196cb91c6bbc6dd4383837819f053b334bb..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_3x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_gn-all_2x_coco.py' - -# learning policy -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101b-d16_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101b-d16_512x1024_80k_cityscapes.py deleted file mode 100644 index af3f765b76e7269d22c8f362e1d41f03d1efaf93..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101b-d16_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './fcn_d6_r50b-d16_512x1024_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(type='ResNet', depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes.py deleted file mode 100644 index 33d96c76f68b92217ed38afe9538144dfedc4fd2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/ocrnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', 
'../_base_/schedules/schedule_80k.py' -] -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) -optimizer = dict(lr=0.02) -lr_config = dict(min_lr=2e-4) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/apis/train.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/apis/train.py deleted file mode 100644 index b15abc6f6f806537e1d335f58615a0b4749ac5ab..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/apis/train.py +++ /dev/null @@ -1,116 +0,0 @@ -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import build_optimizer, build_runner - -from mmseg.core import DistEvalHook, EvalHook -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.utils import get_root_logger - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 
'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - runner.register_hook(eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/README.md b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/README.md deleted file mode 100644 index 1c3130890260c37bd46ecc63cf4a9f0653c47fe9..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/README.md +++ /dev/null @@ -1,2 +0,0 @@ -# Reference -Cloned from [SPNet Github](https://github.com/taozh2017/SPNet) \ No newline at end of file diff --git a/spaces/HUBioDataLab/DrugGEN/data/dataset_download.sh b/spaces/HUBioDataLab/DrugGEN/data/dataset_download.sh deleted file mode 100644 index cbe55c1385df1f435e811902d4c5303b1c527d9f..0000000000000000000000000000000000000000 --- a/spaces/HUBioDataLab/DrugGEN/data/dataset_download.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/sh -pip install gdown - -gdown --fuzzy "https://drive.google.com/file/d/1kDpTm36X3ugpr6Ooo4Fg_dkNRZhQ5EMC/view?usp=share_link" - -gdown --fuzzy "https://drive.google.com/file/d/13h465yaIbrAp5tcGbIwhxriorejr6Fsz/view?usp=share_link" diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/multilingual_fairseq_gen.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/multilingual_fairseq_gen.sh deleted file mode 100644 index 65aa322d7daaa428015de98abe4664a6a4164bfd..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/multilingual_fairseq_gen.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -lang_pairs="en-fr,en-cs,fr-en,cs-en" -path_2_data=$1 # -lang_list=$2 # -model=$3 # -source_lang=cs -target_lang=en - -fairseq-generate "$path_2_data" \ - --path "$model" \ - --task translation_multi_simple_epoch \ - --gen-subset test \ - --source-lang "$source_lang" \ - --target-lang "$target_lang" \ - --sacrebleu --remove-bpe 'sentencepiece'\ - --batch-size 32 \ - --encoder-langtok "src" \ - --decoder-langtok \ - --lang-dict "$lang_list" \ - --lang-pairs "$lang_pairs" diff --git a/spaces/Hellisotherpeople/Reassuring_parables/README.md b/spaces/Hellisotherpeople/Reassuring_parables/README.md deleted file mode 100644 index 0b220b188f3556d40a2df5f764695b416890e573..0000000000000000000000000000000000000000 --- a/spaces/Hellisotherpeople/Reassuring_parables/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Reassuring_parables -emoji: 😖 ➡️ 🤗 -colorFrom: pink -colorTo: gray -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/HighCWu/GPEN/face_model/model.py b/spaces/HighCWu/GPEN/face_model/model.py deleted file mode 100644 index fe54c123c026b8c0af6fac39815afaea0a7017a4..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GPEN/face_model/model.py +++ /dev/null @@ -1,818 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import math -import random -import functools -import operator -import itertools - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2, device='cpu'): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - self.device = device - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad, device=self.device) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2, device='cpu'): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - self.device = device - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad, device=self.device) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1, device='cpu'): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - self.device = device - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad, device=self.device) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, 
bias_init=0, lr_mul=1, activation=None, device='cpu' - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - self.device = device - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul, device=self.device) - - else: - out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - device='cpu' - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor, device=device) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), device=device) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, 
stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self, isconcat=True): - super().__init__() - - self.isconcat = isconcat - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise==None: - batch, channel, height, width = image.shape - noise = image.new_empty(batch, channel, height, width).normal_() - - if self.isconcat: - return torch.cat((image, self.weight * noise), dim=1) - else: - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - isconcat=True, - device='cpu' - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - device=device - ) - - self.noise = NoiseInjection(isconcat) - #self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - #self.activate = ScaledLeakyReLU(0.2) - feat_multiplier = 2 if isconcat else 1 - self.activate = FusedLeakyReLU(out_channel*feat_multiplier, device=device) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], device='cpu'): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel, device=device) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False, device=device) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device='cpu' - ): - super().__init__() - - self.size = size - self.n_mlp = n_mlp - self.style_dim = style_dim - self.feat_multiplier = 2 if isconcat else 1 - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu', device=device - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - 2048: int(8 * channel_multiplier * narrow) - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = 
StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device - ) - self.to_rgb1 = ToRGB(self.channels[4]*self.feat_multiplier, style_dim, upsample=False, device=device) - - self.log_size = int(math.log(size, 2)) - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - - in_channel = self.channels[4] - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel*self.feat_multiplier, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - isconcat=isconcat, - device=device - ) - ) - - self.convs.append( - StyledConv( - out_channel*self.feat_multiplier, out_channel, 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device - ) - ) - - self.to_rgbs.append(ToRGB(out_channel*self.feat_multiplier, style_dim, device=device)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise==None: - ''' - noise = [None] * (2 * (self.log_size - 2) + 1) - ''' - noise = [] - batch = styles[0].shape[0] - for i in range(self.n_mlp + 1): - size = 2 ** (i+2) - noise.append(torch.randn(batch, self.channels[size], size, size, device=styles[0].device)) - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - if inject_index==None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - device='cpu' - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1), device=device)) - - stride = 2 - 
self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel, device=device)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], device='cpu'): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3, device=device) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True, device=device) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - -class FullGenerator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device='cpu' - ): - super().__init__() - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - 2048: int(8 * channel_multiplier * narrow) - } - - self.log_size = int(math.log(size, 2)) - self.generator = Generator(size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp, isconcat=isconcat, narrow=narrow, device=device) - - conv = [ConvLayer(3, channels[size], 1, device=device)] - self.ecd0 = nn.Sequential(*conv) - in_channel = channels[size] - - self.names = ['ecd%d'%i for i in range(self.log_size-1)] - for i in range(self.log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - #conv = [ResBlock(in_channel, out_channel, blur_kernel)] - conv = [ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)] - setattr(self, self.names[self.log_size-i+1], nn.Sequential(*conv)) - in_channel = out_channel - self.final_linear = nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu', device=device)) - - def forward(self, - inputs, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - ): - noise = [] - for i in range(self.log_size-1): - ecd = getattr(self, self.names[i]) - inputs = ecd(inputs) - noise.append(inputs) - #print(inputs.shape) - inputs = inputs.view(inputs.shape[0], -1) - outs = self.final_linear(inputs) - #print(outs.shape) - noise = list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise))[::-1] - outs = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[1:]) - return outs - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], narrow=1, device='cpu'): - super().__init__() - - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * 
narrow), - 2048: int(8 * channel_multiplier * narrow) - } - - convs = [ConvLayer(3, channels[size], 1, device=device)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel, device=device)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3, device=device) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu', device=device), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - return out - -class FullGenerator_SR(nn.Module): - def __init__( - self, - size, - out_size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device='cpu' - ): - super().__init__() - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - 2048: int(8 * channel_multiplier * narrow), - } - - self.log_insize = int(math.log(size, 2)) - self.log_outsize = int(math.log(out_size, 2)) - self.generator = Generator(out_size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp, isconcat=isconcat, narrow=narrow, device=device) - - conv = [ConvLayer(3, channels[size], 1, device=device)] - self.ecd0 = nn.Sequential(*conv) - in_channel = channels[size] - - self.names = ['ecd%d'%i for i in range(self.log_insize-1)] - for i in range(self.log_insize, 2, -1): - out_channel = channels[2 ** (i - 1)] - #conv = [ResBlock(in_channel, out_channel, blur_kernel)] - conv = [ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)] - setattr(self, self.names[self.log_insize-i+1], nn.Sequential(*conv)) - in_channel = out_channel - self.final_linear = nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu', device=device)) - - def forward(self, - inputs, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - ): - noise = [] - for i in range(self.log_outsize-self.log_insize): - noise.append(None) - for i in range(self.log_insize-1): - ecd = getattr(self, self.names[i]) - inputs = ecd(inputs) - noise.append(inputs) - #print(inputs.shape) - inputs = inputs.view(inputs.shape[0], -1) - outs = self.final_linear(inputs) - #print(outs.shape) - noise = list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise))[::-1] - image, latent = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[1:]) - return image, latent \ No newline at end of file 
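For orientation, a minimal usage sketch of the FullGenerator removed in the hunk above — this is editorial commentary, not part of the original diff. It assumes the deleted face_model/model.py (together with its custom `op` package providing FusedLeakyReLU and upfirdn2d) is importable as `face_model.model`, that inputs are 512x512 RGB face crops normalized to [-1, 1], and that the op package's CPU path is available; treat it as a sketch under those assumptions rather than the repository's documented API.

import torch
from face_model.model import FullGenerator  # assumed import path; needs the repo's `op` package on the path

device = 'cpu'
# Constructor arguments mirror the signature defined above; `size` is both the input and output resolution.
model = FullGenerator(size=512, style_dim=512, n_mlp=8,
                      channel_multiplier=2, narrow=1, device=device).to(device)
model.eval()

# Dummy low-quality face crop in [-1, 1], shape (batch, 3, size, size).
lq_face = torch.rand(1, 3, 512, 512, device=device) * 2 - 1

with torch.no_grad():
    restored, _ = model(lq_face)  # forward returns a (image, latent-or-None) tuple

print(restored.shape)  # expected: torch.Size([1, 3, 512, 512])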
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py deleted file mode 100644 index 117827c3e9c176477f33e3a6fd7fe19a922411a2..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .model import * # noqa diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_layer.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_layer.py deleted file mode 100644 index 347b8118daa2818af5e0230a793f2fa8fcd63b3a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_layer.py +++ /dev/null @@ -1,459 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -class TransformerEncoderLayerBase(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.encoder.normalize_before* to ``True``. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.embed_dim = cfg.encoder.embed_dim - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - self.self_attn = self.build_self_attention(self.embed_dim, cfg) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.encoder.normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.encoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.encoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_self_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.encoder.attention_heads, - dropout=cfg.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def residual_connection(self, x, residual): - return residual + x - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. 
- - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), - -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -# backward compatible with the legacy argparse format -class TransformerEncoderLayer(TransformerEncoderLayerBase): - def __init__(self, args): - super().__init__(TransformerConfig.from_namespace(args)) - self.args = args - - def build_self_attention(self, embed_dim, args): - return super().build_self_attention( - embed_dim, TransformerConfig.from_namespace(args) - ) - - -class TransformerDecoderLayerBase(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.decoder.normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
- """ - - def __init__( - self, cfg, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = cfg.decoder.embed_dim - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - - self.cross_self_attention = cfg.cross_self_attention - - self.self_attn = self.build_self_attention( - self.embed_dim, - cfg, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.decoder.normalize_before - - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, cfg) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.decoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.decoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.need_attn = True - - self.onnx_trace = False - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, cfg, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - dropout=cfg.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not cfg.cross_self_attention, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def build_encoder_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - kdim=cfg.encoder.embed_dim, - vdim=cfg.encoder.embed_dim, - dropout=cfg.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, 
src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - 
saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - -# backward compatible with the legacy argparse format -class TransformerDecoderLayer(TransformerDecoderLayerBase): - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__( - TransformerConfig.from_namespace(args), - no_encoder_attn=no_encoder_attn, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.args = args - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return super().build_self_attention( - embed_dim, - TransformerConfig.from_namespace(args), - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - def build_encoder_attention(self, embed_dim, args): - return super().build_encoder_attention( - embed_dim, - TransformerConfig.from_namespace(args), - ) diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/person_detection_video.py b/spaces/Ibtehaj10/cheating-detection-FYP/person_detection_video.py deleted file mode 100644 index fbd6f742afca23acd2debe99679f8c70f9153adb..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/person_detection_video.py +++ /dev/null @@ -1,71 +0,0 @@ -import cv2 -import datetime -import imutils -import numpy as np - -protopath = "MobileNetSSD_deploy.prototxt" -modelpath = "MobileNetSSD_deploy.caffemodel" -detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath) -# Only enable it if you are using OpenVino environment -# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) -# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) - - -CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", - "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", - "dog", "horse", "motorbike", "person", "pottedplant", "sheep", - "sofa", "train", "tvmonitor"] - - -def main(): - cap = cv2.VideoCapture('test_video.mp4') - - fps_start_time = datetime.datetime.now() - fps = 0 - total_frames = 0 - - while True: - ret, frame = cap.read() - frame = imutils.resize(frame, width=600) - total_frames = total_frames + 1 - - (H, W) = frame.shape[:2] - - blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5) - - detector.setInput(blob) - person_detections = detector.forward() - - for i in np.arange(0, person_detections.shape[2]): - confidence = person_detections[0, 0, i, 2] - if confidence > 0.5: - idx = int(person_detections[0, 0, i, 1]) - - if CLASSES[idx] != "person": - continue - - person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H]) - (startX, startY, endX, endY) = person_box.astype("int") - - cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2) - - fps_end_time = datetime.datetime.now() - time_diff = fps_end_time - fps_start_time - if time_diff.seconds == 0: - fps = 0.0 - else: - fps = (total_frames / time_diff.seconds) - - fps_text = "FPS: {:.2f}".format(fps) - - cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) - - cv2.imshow("Application", frame) - key = cv2.waitKey(1) - if key == ord('q'): - break - - cv2.destroyAllWindows() - - -main() diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/train.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/train.py deleted file mode 100644 index 
be9ca8c6ef2a0cb9143ab6a0f4d91f571b691a95..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/train.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python3 - -import logging -import os -import sys -import traceback - -os.environ['OMP_NUM_THREADS'] = '1' -os.environ['OPENBLAS_NUM_THREADS'] = '1' -os.environ['MKL_NUM_THREADS'] = '1' -os.environ['VECLIB_MAXIMUM_THREADS'] = '1' -os.environ['NUMEXPR_NUM_THREADS'] = '1' - -import hydra -from omegaconf import OmegaConf -from pytorch_lightning import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning.loggers import TensorBoardLogger -from pytorch_lightning.plugins import DDPPlugin - -from saicinpainting.training.trainers import make_training_model -from saicinpainting.utils import register_debug_signal_handlers, handle_ddp_subprocess, handle_ddp_parent_process, \ - handle_deterministic_config - -LOGGER = logging.getLogger(__name__) - - -@handle_ddp_subprocess() -@hydra.main(config_path='../configs/training', config_name='tiny_test.yaml') -def main(config: OmegaConf): - try: - need_set_deterministic = handle_deterministic_config(config) - - register_debug_signal_handlers() # kill -10 will result in traceback dumped into log - - is_in_ddp_subprocess = handle_ddp_parent_process() - - config.visualizer.outdir = os.path.join(os.getcwd(), config.visualizer.outdir) - if not is_in_ddp_subprocess: - LOGGER.info(OmegaConf.to_yaml(config)) - OmegaConf.save(config, os.path.join(os.getcwd(), 'config.yaml')) - - checkpoints_dir = os.path.join(os.getcwd(), 'models') - os.makedirs(checkpoints_dir, exist_ok=True) - - # there is no need to suppress this logger in ddp, because it handles rank on its own - metrics_logger = TensorBoardLogger(config.location.tb_dir, name=os.path.basename(os.getcwd())) - metrics_logger.log_hyperparams(config) - - training_model = make_training_model(config) - - trainer_kwargs = OmegaConf.to_container(config.trainer.kwargs, resolve=True) - if need_set_deterministic: - trainer_kwargs['deterministic'] = True - - trainer = Trainer( - # there is no need to suppress checkpointing in ddp, because it handles rank on its own - callbacks=ModelCheckpoint(dirpath=checkpoints_dir, **config.trainer.checkpoint_kwargs), - logger=metrics_logger, - default_root_dir=os.getcwd(), - **trainer_kwargs - ) - trainer.fit(training_model) - except KeyboardInterrupt: - LOGGER.warning('Interrupted by user') - except Exception as ex: - LOGGER.critical(f'Training failed due to {ex}:\n{traceback.format_exc()}') - sys.exit(1) - - -if __name__ == '__main__': - main() diff --git a/spaces/JFoz/CoherentControl/app.py b/spaces/JFoz/CoherentControl/app.py deleted file mode 100644 index 5ee593f4c0805c751c913298e7bd9605e85d7770..0000000000000000000000000000000000000000 --- a/spaces/JFoz/CoherentControl/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import gradio as gr -import torch - -from model import Model -from app_pose import create_demo as create_demo_pose -import argparse -import os - -model = Model() - - - - -with gr.Blocks(css='style.css') as demo: - - with gr.Tab('Pose Conditional'): - create_demo_pose(model) - ''' - ''' - - - -demo.launch(debug=True) - diff --git a/spaces/JUNGU/pixera_gen/methods/img2pixl.py b/spaces/JUNGU/pixera_gen/methods/img2pixl.py deleted file mode 100644 index 4ec29594ebe6e56eaf64746fe3aed6cdb2c7a2f3..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/pixera_gen/methods/img2pixl.py +++ /dev/null @@ -1,71 +0,0 @@ -import cv2 -import numpy as np -from PIL 
import Image - -class pixL: - #Author: Alican Akca - def __init__(self,numOfSquaresW = None, numOfSquaresH= None, size = [False, (512,512)],square = 6,ImgH = None,ImgW = None,images = [],background_image = None): - self.images = images - self.size = size - self.ImgH = ImgH - self.ImgW = ImgW - self.square = square - self.numOfSquaresW = numOfSquaresW - self.numOfSquaresH = numOfSquaresH - - def preprocess(self): - for image in self.images: - - size = (image.shape[0] - (image.shape[0] % 4), image.shape[1] - (image.shape[1] % 4)) - image = cv2.resize(image, size) - image = cv2.cvtColor(image.astype(np.uint8), cv2.COLOR_BGR2RGB) - - if len(self.images) == 1: - return self.images[0] - else: - return self.images - - def toThePixL(self,images, pixel_size): - self.images = [] - self.square = pixel_size - for image in images: - image = Image.fromarray(image) - image = image.convert("RGB") - self.ImgW, self.ImgH = image.size - self.images.append(pixL.epicAlgorithm(self, image)) - - return pixL.preprocess(self) - - def numOfSquaresFunc(self): - self.numOfSquaresW = round((self.ImgW / self.square) + 1) - self.numOfSquaresH = round((self.ImgH / self.square) + 1) - - def epicAlgorithm(self, image): - pixValues = [] - pixL.numOfSquaresFunc(self) - - for j in range(1,self.numOfSquaresH): - - for i in range(1,self.numOfSquaresW): - - pixValues.append((image.getpixel(( - i * self.square - self.square//2, - j * self.square - self.square//2)), - (i * self.square - self.square//2, - j * self.square - self.square//2))) - - background = 255 * np.ones(shape=[self.ImgH - self.square, - self.ImgW - self.square*2, 3], - dtype=np.uint8) - - for pen in range(len(pixValues)): - - cv2.rectangle(background, - pt1=(pixValues[pen][1][0] - self.square,pixValues[pen][1][1] - self.square), - pt2=(pixValues[pen][1][0] + self.square,pixValues[pen][1][1] + self.square), - color=(pixValues[pen][0][2],pixValues[pen][0][1],pixValues[pen][0][0]), - thickness=-1) - background = np.array(background).astype(np.uint8) - background = cv2.resize(background, (self.ImgW,self.ImgH), interpolation = cv2.INTER_AREA) - - return background \ No newline at end of file diff --git a/spaces/Jamkonams/AutoGPT/main.py b/spaces/Jamkonams/AutoGPT/main.py deleted file mode 100644 index 160addc390b94a8b143a3a2e18991a560f9b032e..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/main.py +++ /dev/null @@ -1 +0,0 @@ -from autogpt import main diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/utils/registry.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/utils/registry.py deleted file mode 100644 index 655753b3b9cbd0cfe73fe93a77cf1fcc3db6d827..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/utils/registry.py +++ /dev/null @@ -1,82 +0,0 @@ -# Modified from: https://github.com/facebookresearch/fvcore/blob/master/fvcore/common/registry.py # noqa: E501 - - -class Registry(): - """ - The registry that provides name -> object mapping, to support third-party - users' custom modules. - - To create a registry (e.g. a backbone registry): - - .. code-block:: python - - BACKBONE_REGISTRY = Registry('BACKBONE') - - To register an object: - - .. code-block:: python - - @BACKBONE_REGISTRY.register() - class MyBackbone(): - ... - - Or: - - .. 
code-block:: python - - BACKBONE_REGISTRY.register(MyBackbone) - """ - - def __init__(self, name): - """ - Args: - name (str): the name of this registry - """ - self._name = name - self._obj_map = {} - - def _do_register(self, name, obj): - assert (name not in self._obj_map), (f"An object named '{name}' was already registered " - f"in '{self._name}' registry!") - self._obj_map[name] = obj - - def register(self, obj=None): - """ - Register the given object under the the name `obj.__name__`. - Can be used as either a decorator or not. - See docstring of this class for usage. - """ - if obj is None: - # used as a decorator - def deco(func_or_class): - name = func_or_class.__name__ - self._do_register(name, func_or_class) - return func_or_class - - return deco - - # used as a function call - name = obj.__name__ - self._do_register(name, obj) - - def get(self, name): - ret = self._obj_map.get(name) - if ret is None: - raise KeyError(f"No object named '{name}' found in '{self._name}' registry!") - return ret - - def __contains__(self, name): - return name in self._obj_map - - def __iter__(self): - return iter(self._obj_map.items()) - - def keys(self): - return self._obj_map.keys() - - -DATASET_REGISTRY = Registry('dataset') -ARCH_REGISTRY = Registry('arch') -MODEL_REGISTRY = Registry('model') -LOSS_REGISTRY = Registry('loss') -METRIC_REGISTRY = Registry('metric') diff --git a/spaces/Jean-Baptiste/email_parser/email_parser/_models_signatures.py b/spaces/Jean-Baptiste/email_parser/email_parser/_models_signatures.py deleted file mode 100644 index 88f0d3c42ad0874bebcfb4df038f1de79a1e5b70..0000000000000000000000000000000000000000 --- a/spaces/Jean-Baptiste/email_parser/email_parser/_models_signatures.py +++ /dev/null @@ -1,189 +0,0 @@ -import logging -import pandas as pd -import numpy as np -import regex -import os -import configparser -from sentence_transformers import SentenceTransformer -from scipy.spatial import distance -from keras.preprocessing.sequence import pad_sequences -from sklearn.preprocessing import StandardScaler -from sklearn.preprocessing import MinMaxScaler - -from tensorflow import keras -import pickle - -from . 
import nlp, utils - -config = configparser.ConfigParser() -config.read(os.path.join(os.path.dirname(__file__), 'config.ini')) - - - -model_name = config["DEFAULT"]["name_model_signature"] - -model = keras.models.load_model(filepath=utils.get_model_full_path(model_name)) -minmax_scaler = pickle.load(open(utils.get_model_full_path(model_name +"/minmax_scaler.p"), "rb")) -standard_scaler = pickle.load(open(utils.get_model_full_path(model_name +"/standard_scaler.p"), "rb")) - - -list_name_columns_features = ["line_number", - "text", - "start", - "end", - "PER", "ORG", "LOC", "DATE", "TEL", "EMAIL", "WEB", - "SIGNATURE", - "word_count", - "inv_distance_to_merci", - "inv_distance_to_cordlt", - "inv_distance_to_regards", - "inv_distance_to_sincerely", - "inv_distance_to_sent_from", - "start_with_ps", "position_line", - "special_characters_count", "empty_chars_with_prev_line"] - -list_columns_used_in_model = ["PER", "ORG", "LOC", "DATE", "TEL", "EMAIL", - # "WEB", - "word_count", - "inv_distance_to_merci", - "inv_distance_to_cordlt", - # "inv_distance_to_regards", - "inv_distance_to_sincerely", - "inv_distance_to_sent_from", - "start_with_ps", - "position_line", - "special_characters_count", - "empty_chars_with_prev_line"] - -columns_to_scale_minmax = ["PER", "ORG", "LOC", "DATE", "TEL", "EMAIL", "WEB", "position_line", - "empty_chars_with_prev_line", - "inv_distance_to_merci", - "inv_distance_to_cordlt", - "inv_distance_to_regards", - "inv_distance_to_sincerely", - "inv_distance_to_sent_from", - "start_with_ps" - ] - -columns_to_scale_standard = ["word_count", "special_characters_count"] - -def f_retrieve_entities_for_line(df_ner, start=0, end=1e12): - """Retrieve all entities in the previously computed dataframe for a specific line - - Args: - df_ner: dataframe containing found entities - start: start position of the line in original text - end: end position of the line in original text - - """ - - if len(df_ner) > 0: - df = df_ner.query(f"""(start>= {start} and end <= {end}) or (start<={start} and end>={end})""") - return df - - -embedder_model = SentenceTransformer("distiluse-base-multilingual-cased-v1") - - -def f_create_embedding_inv_dist_feature(text1, text2): - """ Computing distance between two texts based on their embedding - provided by the SentenceTransformer above""" - embedding_merci = embedder_model.encode(text1) - embedding_line = embedder_model.encode(text2) - dist = distance.cosine(embedding_merci, embedding_line) - return min(5, 1 / (dist + 0.0001)) - - -def f_create_email_lines_features(text, df_ner=None, position_offset=0): - list_lines = nlp.f_split_text_by_lines(text, position_offset) - list_features_vectors = [] - if df_ner is None: - df_ner = nlp.f_ner(text) - - for line_number in range(0, len(list_lines)): - list_features_vectors.append(f_create_line_features(list_lines, line_number, df_ner)) - - df_features = pd.DataFrame(list_features_vectors, columns=list_name_columns_features) - - return df_features - - - -def f_create_line_features(list_lines, line_number, df_ner): - current_line = list_lines[line_number] - total_lines = len(list_lines) - features_vector = [line_number, current_line[2], current_line[0], current_line[1]] - logging.debug(f"Creating line features for {current_line}") - df_ner_line = f_retrieve_entities_for_line(df_ner=df_ner, start=current_line[0], end=current_line[1]) - - # Adding entity to feature vector - for entity in ["PER", "ORG", "LOC", "DATE", "TEL", "EMAIL", "WEB", "SIGNATURE"]: - value = len(df_ner_line.query(f"entity=='{entity}'")) if 
df_ner_line is not None else 0 - features_vector.append(value) - # Adding word count - features_vector.append(len(current_line[2].split())) - # distance to greeting word "merci" - features_vector.append(f_create_embedding_inv_dist_feature("merci", current_line[2].lower())) - - # distance to greeting word "merci" - features_vector.append(f_create_embedding_inv_dist_feature("cordialement", current_line[2].lower())) - - # distance to greeting word "regards" - features_vector.append(f_create_embedding_inv_dist_feature("regards", current_line[2].lower())) - - # distance to greeting word "regards" - features_vector.append(f_create_embedding_inv_dist_feature("sincerely", current_line[2].lower())) - - # distance to word "sent from" - features_vector.append(f_create_embedding_inv_dist_feature("sent from", current_line[2].lower())) - - # Line start with ps: - features_vector.append(regex.match(r"\s*ps *:", current_line[2], flags=regex.IGNORECASE ) is not None) - - # Adding position line in email - position_in_email = (line_number + 1) / total_lines - features_vector.append(position_in_email) - # Adding special character count - special_char_count = len(regex.findall(r"[^\p{L}0-9 .,\n]", current_line[2])) - features_vector.append(special_char_count) - # Number of empty chars with previous line - empty_chars_with_prev_line = 0 if line_number == 0 else current_line[0] - list_lines[line_number - 1][1] - features_vector.append(empty_chars_with_prev_line) - return features_vector - - -def generate_x_y(df, minmax_scaler=None, standard_scaler=None, n_last_lines_to_keep=30, - list_columns=list_columns_used_in_model): - df, minmax_scaler, standard_scaler = f_scale_parameters(df, minmax_scaler, standard_scaler) - x = df[list_columns].to_numpy()[-n_last_lines_to_keep:, :] - x = np.expand_dims(x, axis=0) - x = pad_sequences(x, dtype='float64', value=0, maxlen=n_last_lines_to_keep) - - y = df["is_signature"].to_numpy()[-n_last_lines_to_keep:] - y = np.expand_dims(y, axis=0) - y_out = pad_sequences(y, value=0, maxlen=n_last_lines_to_keep) - y_mask = pad_sequences(y, value=-1, maxlen=n_last_lines_to_keep) - return x, y_out, y_mask, minmax_scaler, standard_scaler - -def f_scale_parameters(df_tagged_data, minmax_scaler=None, standard_scaler=None): - # df_tagged_data = df_tagged_data.copy(deep=True) - if minmax_scaler is None: - logging.debug("fitting new min max scaller") - minmax_scaler = MinMaxScaler() - df_tagged_data.loc[:, columns_to_scale_minmax] = minmax_scaler.fit_transform( - df_tagged_data[columns_to_scale_minmax]) - else: - logging.debug("using already fitted minmax scaler") - df_tagged_data.loc[:, columns_to_scale_minmax] = minmax_scaler.transform( - df_tagged_data[columns_to_scale_minmax]) - - if standard_scaler is None: - logging.debug("fitting new standard scaler") - standard_scaler = StandardScaler() - df_tagged_data.loc[:, columns_to_scale_standard] = standard_scaler.fit_transform( - df_tagged_data[columns_to_scale_standard]) - else: - logging.debug("using already fitted scaler") - df_tagged_data.loc[:, columns_to_scale_standard] = standard_scaler.transform( - df_tagged_data[columns_to_scale_standard]) - return df_tagged_data, minmax_scaler, standard_scaler \ No newline at end of file diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/utils.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/utils.py deleted file mode 100644 index fcc7d4b198a8e796d3ef5016c8eb0226ca4d6f9a..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/utils.py +++ 
/dev/null @@ -1,683 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import commentjson as json -import os -import datetime -import csv -import requests -import re -import html -import hashlib - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter -import pandas as pd - -from modules.presets import * -from . import shared -from modules.config import retrieve_proxy, hide_history_when_not_logged_in - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - -def predict(current_model, *args): - iter = current_model.predict(*args) - for i in iter: - yield i - -def billing_info(current_model): - return current_model.billing_info() - -def set_key(current_model, *args): - return current_model.set_key(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def delete_chat_history(current_model, *args): - return current_model.delete_chat_history(*args) - -def interrupt(current_model, *args): - return current_model.interrupt(*args) - -def reset(current_model, *args): - return current_model.reset(*args) - -def retry(current_model, *args): - iter = current_model.retry(*args) - for i in iter: - yield i - -def delete_first_conversation(current_model, *args): - return current_model.delete_first_conversation(*args) - -def delete_last_conversation(current_model, *args): - return current_model.delete_last_conversation(*args) - -def set_system_prompt(current_model, *args): - return current_model.set_system_prompt(*args) - -def save_chat_history(current_model, *args): - return current_model.save_chat_history(*args) - -def export_markdown(current_model, *args): - return current_model.export_markdown(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def upload_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def set_token_upper_limit(current_model, *args): - return current_model.set_token_upper_limit(*args) - -def set_temperature(current_model, *args): - current_model.set_temperature(*args) - -def set_top_p(current_model, *args): - current_model.set_top_p(*args) - -def set_n_choices(current_model, *args): - current_model.set_n_choices(*args) - -def set_stop_sequence(current_model, *args): - current_model.set_stop_sequence(*args) - -def set_max_tokens(current_model, *args): - current_model.set_max_tokens(*args) - -def set_presence_penalty(current_model, *args): - current_model.set_presence_penalty(*args) - -def set_frequency_penalty(current_model, *args): - current_model.set_frequency_penalty(*args) - -def set_logit_bias(current_model, *args): - current_model.set_logit_bias(*args) - -def set_user_identifier(current_model, *args): - current_model.set_user_identifier(*args) - -def set_single_turn(current_model, *args): - current_model.set_single_turn(*args) - -def handle_file_upload(current_model, *args): - return current_model.handle_file_upload(*args) - -def handle_summarize_index(current_model, *args): - return current_model.summarize_index(*args) - -def like(current_model, *args): - return current_model.like(*args) - -def dislike(current_model, *args): - return current_model.dislike(*args) - - -def 
count_token(input_str): - encoding = tiktoken.get_encoding("cl100k_base") - if type(input_str) == dict: - input_str = f"role: {input_str['role']}, content: {input_str['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): # deprecated - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
    {highlighted_code}
    ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: # deprecated - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): # deprecated - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - raw = f'
    {html.escape(md_text)}
    ' - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - result.append(markdown(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - output = f'
    {result}
    ' - output += raw - output += ALREADY_CONVERTED_MARK - return output - - -def clip_rawtext(chat_message, need_escape=True): - # first, clip hr line - hr_pattern = r'\n\n
    (.*?)' - hr_match = re.search(hr_pattern, chat_message, re.DOTALL) - message_clipped = chat_message[:hr_match.start()] if hr_match else chat_message - # second, avoid agent-prefix being escaped - agent_prefix_pattern = r'

    (.*?)<\/p>' - agent_matches = re.findall(agent_prefix_pattern, message_clipped) - final_message = "" - if agent_matches: - agent_parts = re.split(agent_prefix_pattern, message_clipped) - for i, part in enumerate(agent_parts): - if i % 2 == 0: - final_message += escape_markdown(part) if need_escape else part - else: - final_message += f'

    {part}

    ' - else: - final_message = escape_markdown(message_clipped) if need_escape else message_clipped - return final_message - - -def convert_bot_before_marked(chat_message): - """ - 注意不能给输出加缩进, 否则会被marked解析成代码块 - """ - if '
    ' in chat_message: - return chat_message - else: - raw = f'
    {clip_rawtext(chat_message)}
    ' - # really_raw = f'{START_OF_OUTPUT_MARK}
    {clip_rawtext(chat_message, need_escape=False)}\n
    {END_OF_OUTPUT_MARK}' - - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - code_blocks = code_block_pattern.findall(chat_message) - non_code_parts = code_block_pattern.split(chat_message)[::2] - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - result.append(non_code) - if code.strip(): - code = f"\n```{code}\n```" - result.append(code) - result = "".join(result) - md = f'
    \n\n{result}\n
    ' - return raw + md - -def convert_user_before_marked(chat_message): - if '
    ' in chat_message: - return chat_message - else: - return f'
    {escape_markdown(chat_message)}
    ' - -def escape_markdown(text): - """ - Escape Markdown special characters to HTML-safe equivalents. - """ - escape_chars = { - # ' ': ' ', - '_': '_', - '*': '*', - '[': '[', - ']': ']', - '(': '(', - ')': ')', - '{': '{', - '}': '}', - '#': '#', - '+': '+', - '-': '-', - '.': '.', - '!': '!', - '`': '`', - '>': '>', - '<': '<', - '|': '|', - '$': '$', - ':': ':', - '\n': '
    ', - } - text = text.replace(' ', '    ') - return ''.join(escape_chars.get(c, c) for c in text) - - -def convert_asis(userinput): # deprecated - return ( - f'

    {html.escape(userinput)}

    ' - + ALREADY_CONVERTED_MARK - ) - - -def detect_converted_mark(userinput): # deprecated - try: - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - except: - return True - - -def detect_language(code): # deprecated - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def save_file(filename, system, history, chatbot, user_name): - logging.debug(f"{user_name} 保存对话历史中……") - os.makedirs(os.path.join(HISTORY_DIR, user_name), exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - if "/" in filename or "\\" in filename: - history_file_path = filename - else: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - with open(history_file_path, "w", encoding='utf-8') as f: - json.dump(json_s, f, ensure_ascii=False) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, user_name, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.debug(f"{user_name} 保存对话历史完毕") - return os.path.join(HISTORY_DIR, user_name, filename) - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.debug(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - logging.debug(f"files are:{files}") - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False, user_name=""): - logging.debug(f"从用户 {user_name} 中获取历史记录文件名列表") - if user_name == "" and hide_history_when_not_logged_in: - return "" - else: - return get_file_names(os.path.join(HISTORY_DIR, user_name), plain) - - -def load_template(filename, mode=0): - logging.debug(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices - ) - - -def get_template_names(plain=False): - logging.debug("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - 
logging.debug(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - default_host = shared.state.reset_api_host() - retrieve_proxy("") - return gr.update(value=default_host), gr.update(value=""), "API-Host 和代理已重置" - - -def change_api_host(host): - shared.state.set_api_host(host) - msg = f"API-Host更改为了{host}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - retrieve_proxy(proxy) - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if s is None: - return "" - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - try: - with retrieve_proxy(): - response = requests.get("https://ipapi.co/json/", timeout=5) - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - i18n("您的IP区域:未知。") - ) - else: - return i18n("获取IP地理位置失败。原因:") + f"{data['reason']}" + i18n("。你仍然可以使用聊天功能。") - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = i18n("您的IP区域:") + f"{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=False), gr.Button.update(visible=True) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=False), - gr.Button.update(visible=True), - ) - - -def update_chuanhu(): - from .repo import background_update - - print("[Updater] Trying to update...") - update_status = background_update() - if update_status == "success": - logging.info("Successfully updated, restart needed") - status = 'success' - return gr.Markdown.update(value=i18n("更新成功,请重启本程序")+status) - else: - status = 'failure' - return gr.Markdown.update(value=i18n("更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)")+status) - - -def add_source_numbers(lst, source_name = "Source", use_source = True): - if use_source: - return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)] - else: - return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)] - -def add_details(lst): - nodes = [] - for index, txt in enumerate(lst): - brief = txt[:25].replace("\n", "") - nodes.append( - f"
    {brief}...

    {txt}

    " - ) - return nodes - - -def sheet_to_string(sheet, sheet_name = None): - result = [] - for index, row in sheet.iterrows(): - row_string = "" - for column in sheet.columns: - row_string += f"{column}: {row[column]}, " - row_string = row_string.rstrip(", ") - row_string += "." - result.append(row_string) - return result - -def excel_to_string(file_path): - # 读取Excel文件中的所有工作表 - excel_file = pd.read_excel(file_path, engine='openpyxl', sheet_name=None) - - # 初始化结果字符串 - result = [] - - # 遍历每一个工作表 - for sheet_name, sheet_data in excel_file.items(): - - # 处理当前工作表并添加到结果字符串 - result += sheet_to_string(sheet_data, sheet_name=sheet_name) - - - return result - -def get_last_day_of_month(any_day): - # The day 28 exists in every month. 4 days later, it's always next month - next_month = any_day.replace(day=28) + datetime.timedelta(days=4) - # subtracting the number of the current day brings us back one month - return next_month - datetime.timedelta(days=next_month.day) - -def get_model_source(model_name, alternative_source): - if model_name == "gpt2-medium": - return "https://huggingface.co/gpt2-medium" - -def refresh_ui_elements_on_load(current_model, selected_model_name, user_name): - current_model.set_user_identifier(user_name) - return toggle_like_btn_visibility(selected_model_name), *current_model.auto_load() - -def toggle_like_btn_visibility(selected_model_name): - if selected_model_name == "xmchat": - return gr.update(visible=True) - else: - return gr.update(visible=False) - -def new_auto_history_filename(dirname): - latest_file = get_latest_filepath(dirname) - if latest_file: - with open(os.path.join(dirname, latest_file), 'r', encoding="utf-8") as f: - if len(f.read()) == 0: - return latest_file - now = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S') - return f'{now}.json' - -def get_latest_filepath(dirname): - pattern = re.compile(r'\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}') - latest_time = None - latest_file = None - for filename in os.listdir(dirname): - if os.path.isfile(os.path.join(dirname, filename)): - match = pattern.search(filename) - if match and match.group(0) == filename[:19]: - time_str = filename[:19] - filetime = datetime.datetime.strptime(time_str, '%Y-%m-%d_%H-%M-%S') - if not latest_time or filetime > latest_time: - latest_time = filetime - latest_file = filename - return latest_file - -def get_history_filepath(username): - dirname = os.path.join(HISTORY_DIR, username) - os.makedirs(dirname, exist_ok=True) - latest_file = get_latest_filepath(dirname) - if not latest_file: - latest_file = new_auto_history_filename(dirname) - - latest_file = os.path.join(dirname, latest_file) - return latest_file - -def beautify_err_msg(err_msg): - if "insufficient_quota" in err_msg: - return i18n("剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)") - if "The model: gpt-4 does not exist" in err_msg: - return i18n("你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)") - if "Resource not found" in err_msg: - return i18n("请查看 config_example.json,配置 Azure OpenAI") - return err_msg - -def auth_from_conf(username, password): - try: - with open("config.json", encoding="utf-8") as f: - conf = json.load(f) - usernames, passwords = [i[0] for i in conf["users"]], [i[1] for i in conf["users"]] - if username in usernames: - if passwords[usernames.index(username)] == password: - return True - return False - except: - return False - -def 
get_file_hash(file_src=None, file_paths=None): - if file_src: - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() diff --git a/spaces/Kaori1707/Depth-estimation/dpt/blocks.py b/spaces/Kaori1707/Depth-estimation/dpt/blocks.py deleted file mode 100644 index 46b3fe3fffe17cae3c885491937bbb1f09a21e9d..0000000000000000000000000000000000000000 --- a/spaces/Kaori1707/Depth-estimation/dpt/blocks.py +++ /dev/null @@ -1,383 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - - -def _make_encoder( - backbone, - features, - use_pretrained, - groups=1, - expand=False, - exportable=True, - hooks=None, - use_vit_only=False, - use_readout="ignore", - enable_attention_hooks=False, -): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, - hooks=hooks, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, - hooks=hooks, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch( - [256, 512, 1024, 2048], features, groups=groups, expand=expand - ) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand == True: - out_shape1 = out_shape - out_shape2 = out_shape * 2 - out_shape3 = out_shape * 4 - out_shape4 = out_shape * 8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], - out_shape1, - kernel_size=3, - stride=1, - padding=1, - bias=False, - groups=groups, - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], - out_shape2, - kernel_size=3, - stride=1, - padding=1, - bias=False, - groups=groups, - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], - out_shape3, - kernel_size=3, - stride=1, - padding=1, - bias=False, - groups=groups, - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], - out_shape4, - kernel_size=3, - stride=1, - padding=1, - bias=False, - groups=groups, - ) - - return scratch - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return 
pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - -class Interpolate(nn.Module): - """Interpolation module.""" - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners, - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module.""" - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block.""" - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module.""" - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups = 1 - - self.conv1 = nn.Conv2d( - features, - features, - kernel_size=3, - stride=1, - padding=1, - bias=not self.bn, - groups=self.groups, - ) - - self.conv2 = nn.Conv2d( - features, - features, - kernel_size=3, - stride=1, - padding=1, - bias=not self.bn, - groups=self.groups, - ) - - if self.bn == True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn == True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn == True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block.""" - - def __init__( - self, - features, - activation, - deconv=False, - bn=False, - expand=False, - align_corners=True, - ): - """Init. 
- - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups = 1 - - self.expand = expand - out_features = features - if self.expand == True: - out_features = features // 2 - - self.out_conv = nn.Conv2d( - features, - out_features, - kernel_size=1, - stride=1, - padding=0, - bias=True, - groups=1, - ) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output diff --git a/spaces/Kevin676/Gpt4All/README.md b/spaces/Kevin676/Gpt4All/README.md deleted file mode 100644 index 96d5eb318373879004f12ca261d779d467352d43..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Gpt4All/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gpt4All -emoji: 🐨 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -duplicated_from: BilalSardar/Gpt4All ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/__init__.py b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kimata/multimodal_deepfake_detection/inference.py b/spaces/Kimata/multimodal_deepfake_detection/inference.py deleted file mode 100644 index d1e052dc2d97a9975de04b52b694173d49f3aa2d..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal_deepfake_detection/inference.py +++ /dev/null @@ -1,211 +0,0 @@ -import os -import cv2 -import torch -import argparse -import numpy as np -import torch.nn as nn -from models.TMC import ETMC -from models import image - -#Set random seed for reproducibility. 
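-# (Note: this seeds only PyTorch's RNG; numpy and Python's random module keep their own state, so fully deterministic runs would also need those seeded and, on GPU, the cuDNN determinism flags set.)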
-torch.manual_seed(42) - - -# Define the audio_args dictionary -audio_args = { - 'nb_samp': 64600, - 'first_conv': 1024, - 'in_channels': 1, - 'filts': [20, [20, 20], [20, 128], [128, 128]], - 'blocks': [2, 4], - 'nb_fc_node': 1024, - 'gru_node': 1024, - 'nb_gru_layer': 3, -} - - -def get_args(parser): - parser.add_argument("--batch_size", type=int, default=8) - parser.add_argument("--data_dir", type=str, default="datasets/train/fakeavceleb*") - parser.add_argument("--LOAD_SIZE", type=int, default=256) - parser.add_argument("--FINE_SIZE", type=int, default=224) - parser.add_argument("--dropout", type=float, default=0.2) - parser.add_argument("--gradient_accumulation_steps", type=int, default=1) - parser.add_argument("--hidden", nargs="*", type=int, default=[]) - parser.add_argument("--hidden_sz", type=int, default=768) - parser.add_argument("--img_embed_pool_type", type=str, default="avg", choices=["max", "avg"]) - parser.add_argument("--img_hidden_sz", type=int, default=1024) - parser.add_argument("--include_bn", type=int, default=True) - parser.add_argument("--lr", type=float, default=1e-4) - parser.add_argument("--lr_factor", type=float, default=0.3) - parser.add_argument("--lr_patience", type=int, default=10) - parser.add_argument("--max_epochs", type=int, default=500) - parser.add_argument("--n_workers", type=int, default=12) - parser.add_argument("--name", type=str, default="MMDF") - parser.add_argument("--num_image_embeds", type=int, default=1) - parser.add_argument("--patience", type=int, default=20) - parser.add_argument("--savedir", type=str, default="./savepath/") - parser.add_argument("--seed", type=int, default=1) - parser.add_argument("--n_classes", type=int, default=2) - parser.add_argument("--annealing_epoch", type=int, default=10) - parser.add_argument("--device", type=str, default='cpu') - parser.add_argument("--pretrained_image_encoder", type=bool, default = False) - parser.add_argument("--freeze_image_encoder", type=bool, default = False) - parser.add_argument("--pretrained_audio_encoder", type = bool, default=False) - parser.add_argument("--freeze_audio_encoder", type = bool, default = False) - parser.add_argument("--augment_dataset", type = bool, default = True) - - for key, value in audio_args.items(): - parser.add_argument(f"--{key}", type=type(value), default=value) - -def model_summary(args): - '''Prints the model summary.''' - model = ETMC(args) - - for name, layer in model.named_modules(): - print(name, layer) - -def load_multimodal_model(args): - '''Load multimodal model''' - model = ETMC(args) - ckpt = torch.load('checkpoints/model_best.pt', map_location = torch.device('cpu')) - model.load_state_dict(ckpt,strict = False) - model.eval() - return model - -def load_img_modality_model(args): - '''Loads image modality model.''' - rgb_encoder = image.ImageEncoder(args) - ckpt = torch.load('checkpoints/model_best.pt', map_location = torch.device('cpu')) - rgb_encoder.load_state_dict(ckpt,strict = False) - rgb_encoder.eval() - return rgb_encoder - -def load_spec_modality_model(args): - spec_encoder = image.RawNet(args) - ckpt = torch.load('checkpoints/model_best.pt', map_location = torch.device('cpu')) - spec_encoder.load_state_dict(ckpt,strict = False) - spec_encoder.eval() - return spec_encoder - - -#Load models. 
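-# The default arguments are parsed once at import time, so the multimodal model and the two modality encoders below are constructed a single time and then reused by every prediction call.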
-parser = argparse.ArgumentParser(description="Train Models") -get_args(parser) -args, remaining_args = parser.parse_known_args() -assert remaining_args == [], remaining_args - -multimodal = load_multimodal_model(args) -spec_model = load_spec_modality_model(args) -img_model = load_img_modality_model(args) - - -def preprocess_img(face): - face = face / 255 - face = cv2.resize(face, (256, 256)) - face = face.transpose(2, 0, 1) #(W, H, C) -> (C, W, H) - face_pt = torch.unsqueeze(torch.Tensor(face), dim = 0) - return face_pt - -def preprocess_audio(audio_file): - audio_pt = torch.unsqueeze(torch.Tensor(audio_file), dim = 0) - return audio_pt - -def deepfakes_spec_predict(input_audio): - x, _ = input_audio - audio = preprocess_audio(x) - spec_grads = spec_model.forward(audio) - multimodal_grads = multimodal.spec_depth[0].forward(spec_grads) - - out = nn.Softmax()(multimodal_grads) - max = torch.argmax(out, dim = -1) #Index of the max value in the tensor. - max_value = out[max] #Actual value of the tensor. - max_value = np.argmax(out[max].detach().numpy()) - - if max_value > 0.5: - preds = round(100 - (max_value*100), 3) - text2 = f"The audio is REAL." - - else: - preds = round(max_value*100, 3) - text2 = f"The audio is FAKE." - - return text2 - -def deepfakes_image_predict(input_image): - face = preprocess_img(input_image) - - img_grads = img_model.forward(face) - multimodal_grads = multimodal.clf_rgb[0].forward(img_grads) - - out = nn.Softmax()(multimodal_grads) - max = torch.argmax(out, dim=-1) #Index of the max value in the tensor. - max = max.cpu().detach().numpy() - max_value = out[max] #Actual value of the tensor. - max_value = np.argmax(out[max].detach().numpy()) - - if max_value > 0.5: - preds = round(100 - (max_value*100), 3) - text2 = f"The image is REAL." - - else: - preds = round(max_value*100, 3) - text2 = f"The image is FAKE." - - return text2 - - -def preprocess_video(input_video, n_frames = 5): - v_cap = cv2.VideoCapture(input_video) - v_len = int(v_cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - # Pick 'n_frames' evenly spaced frames to sample - if n_frames is None: - sample = np.arange(0, v_len) - else: - sample = np.linspace(0, v_len - 1, n_frames).astype(int) - - #Loop through frames. - frames = [] - for j in range(v_len): - success = v_cap.grab() - if j in sample: - # Load frame - success, frame = v_cap.retrieve() - if not success: - continue - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = preprocess_img(frame) - frames.append(frame) - v_cap.release() - return frames - - -def deepfakes_video_predict(input_video): - '''Perform inference on a video.''' - video_frames = preprocess_video(input_video) - - real_grads = [] - fake_grads = [] - - for face in video_frames: - img_grads = img_model.forward(face) - multimodal_grads = multimodal.clf_rgb[0].forward(img_grads) - - out = nn.Softmax()(multimodal_grads) - real_grads.append(out.cpu().detach().numpy()[0]) - print(f"Video out tensor shape is: {out.shape}, {out}") - - fake_grads.append(out.cpu().detach().numpy()[0]) - - real_grads_mean = np.mean(real_grads) - fake_grads_mean = np.mean(fake_grads) - - if real_grads_mean > fake_grads_mean: - res = round(real_grads_mean * 100, 3) - text = f"The video is REAL." - else: - res = round(100 - (real_grads_mean * 100), 3) - text = f"The video is FAKE." 
- return text - diff --git a/spaces/LAKSJAKLCNDWNVWHEFKJH/asdfghjkl/README.md b/spaces/LAKSJAKLCNDWNVWHEFKJH/asdfghjkl/README.md deleted file mode 100644 index f4200ec1cc71da2948855cdb2fb39c6115bb81e5..0000000000000000000000000000000000000000 --- a/spaces/LAKSJAKLCNDWNVWHEFKJH/asdfghjkl/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Asdfghjkl -emoji: 💻 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lamai/LAMAIGPT/autogpt/llm_utils.py b/spaces/Lamai/LAMAIGPT/autogpt/llm_utils.py deleted file mode 100644 index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/llm_utils.py +++ /dev/null @@ -1,172 +0,0 @@ -from __future__ import annotations - -import time -from ast import List - -import openai -from colorama import Fore, Style -from openai.error import APIError, RateLimitError - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - -openai.api_key = CFG.openai_api_key - - -def call_ai_function( - function: str, args: list, description: str, model: str | None = None -) -> str: - """Call an AI function - - This is a magic function that can do anything with no-code. See - https://github.com/Torantulino/AI-Functions for more info. - - Args: - function (str): The function to call - args (list): The arguments to pass to the function - description (str): The description of the function - model (str, optional): The model to use. Defaults to None. - - Returns: - str: The response from the function - """ - if model is None: - model = CFG.smart_llm_model - # For each arg, if any are None, convert to "None": - args = [str(arg) if arg is not None else "None" for arg in args] - # parse args to comma separated string - args = ", ".join(args) - messages = [ - { - "role": "system", - "content": f"You are now the following python function: ```# {description}" - f"\n{function}```\n\nOnly respond with your `return` value.", - }, - {"role": "user", "content": args}, - ] - - return create_chat_completion(model=model, messages=messages, temperature=0) - - -# Overly simple abstraction until we create something better -# simple retry mechanism when getting a rate error or a bad gateway -def create_chat_completion( - messages: list, # type: ignore - model: str | None = None, - temperature: float = CFG.temperature, - max_tokens: int | None = None, -) -> str: - """Create a chat completion using the OpenAI API - - Args: - messages (list[dict[str, str]]): The messages to send to the chat completion - model (str, optional): The model to use. Defaults to None. - temperature (float, optional): The temperature to use. Defaults to 0.9. - max_tokens (int, optional): The max tokens to use. Defaults to None. 
- - Returns: - str: The response from the chat completion - """ - response = None - num_retries = 10 - warned_user = False - if CFG.debug_mode: - print( - Fore.GREEN - + f"Creating chat completion with model {model}, temperature {temperature}," - f" max_tokens {max_tokens}" + Fore.RESET - ) - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - response = openai.ChatCompletion.create( - deployment_id=CFG.get_azure_deployment_id_for_model(model), - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - else: - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - break - except RateLimitError: - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"Reached rate limit, passing..." + Fore.RESET, - ) - if not warned_user: - logger.double_check( - f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. " - + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}" - ) - warned_user = True - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) - if response is None: - logger.typewriter_log( - "FAILED TO GET RESPONSE FROM OPENAI", - Fore.RED, - "Auto-GPT has failed to get a response from OpenAI's services. " - + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.", - ) - logger.double_check() - if CFG.debug_mode: - raise RuntimeError(f"Failed to get response after {num_retries} retries") - else: - quit(1) - - return response.choices[0].message["content"] - - -def create_embedding_with_ada(text) -> list: - """Create an embedding with text-ada-002 using the OpenAI SDK""" - num_retries = 10 - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - return openai.Embedding.create( - input=[text], - engine=CFG.get_azure_deployment_id_for_model( - "text-embedding-ada-002" - ), - )["data"][0]["embedding"] - else: - return openai.Embedding.create( - input=[text], model="text-embedding-ada-002" - )["data"][0]["embedding"] - except RateLimitError: - pass - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) diff --git a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/pyfolio.py b/spaces/Lianjd/stock_dashboard/backtrader/analyzers/pyfolio.py deleted file mode 100644 index 2a385861816aca7ee460b2cc18b870e2d8947ffe..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/pyfolio.py +++ /dev/null @@ -1,163 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. 
-# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -import collections - -import backtrader as bt -from backtrader.utils.py3 import items, iteritems - -from . import TimeReturn, PositionsValue, Transactions, GrossLeverage - - -class PyFolio(bt.Analyzer): - '''This analyzer uses 4 children analyzers to collect data and transforms it - in to a data set compatible with ``pyfolio`` - - Children Analyzer - - - ``TimeReturn`` - - Used to calculate the returns of the global portfolio value - - - ``PositionsValue`` - - Used to calculate the value of the positions per data. It sets the - ``headers`` and ``cash`` parameters to ``True`` - - - ``Transactions`` - - Used to record each transaction on a data (size, price, value). Sets - the ``headers`` parameter to ``True`` - - - ``GrossLeverage`` - - Keeps track of the gross leverage (how much the strategy is invested) - - Params: - These are passed transparently to the children - - - timeframe (default: ``bt.TimeFrame.Days``) - - If ``None`` then the timeframe of the 1st data of the system will be - used - - - compression (default: `1``) - - If ``None`` then the compression of the 1st data of the system will be - used - - Both ``timeframe`` and ``compression`` are set following the default - behavior of ``pyfolio`` which is working with *daily* data and upsample it - to obtaine values like yearly returns. 
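The docstring above lists the four child analyzers that `PyFolio` aggregates. As a usage sketch (editorial, not part of the deleted file), the analyzer is attached to a `Cerebro` instance and queried after the run; the data feed and strategy are assumed to have been added by the caller:

```python
import backtrader as bt


def run_with_pyfolio(cerebro: bt.Cerebro):
    """Run a pre-configured Cerebro and return the four pyfolio-ready objects.

    `cerebro` is assumed to already have a data feed and a strategy added.
    """
    cerebro.addanalyzer(bt.analyzers.PyFolio, _name="pyfolio")
    results = cerebro.run()
    pyfoliozer = results[0].analyzers.getbyname("pyfolio")
    # returns, positions, transactions, gross_lev (see get_pf_items below)
    return pyfoliozer.get_pf_items()
```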
- - Methods: - - - get_analysis - - Returns a dictionary with returns as values and the datetime points for - each return as keys - ''' - params = ( - ('timeframe', bt.TimeFrame.Days), - ('compression', 1) - ) - - def __init__(self): - dtfcomp = dict(timeframe=self.p.timeframe, - compression=self.p.compression) - - self._returns = TimeReturn(**dtfcomp) - self._positions = PositionsValue(headers=True, cash=True) - self._transactions = Transactions(headers=True) - self._gross_lev = GrossLeverage() - - def stop(self): - super(PyFolio, self).stop() - self.rets['returns'] = self._returns.get_analysis() - self.rets['positions'] = self._positions.get_analysis() - self.rets['transactions'] = self._transactions.get_analysis() - self.rets['gross_lev'] = self._gross_lev.get_analysis() - - def get_pf_items(self): - '''Returns a tuple of 4 elements which can be used for further processing with - ``pyfolio`` - - returns, positions, transactions, gross_leverage - - Because the objects are meant to be used as direct input to ``pyfolio`` - this method makes a local import of ``pandas`` to convert the internal - *backtrader* results to *pandas DataFrames* which is the expected input - by, for example, ``pyfolio.create_full_tear_sheet`` - - The method will break if ``pandas`` is not installed - ''' - # keep import local to avoid disturbing installations with no pandas - import pandas - from pandas import DataFrame as DF - - # - # Returns - cols = ['index', 'return'] - returns = DF.from_records(iteritems(self.rets['returns']), - index=cols[0], columns=cols) - returns.index = pandas.to_datetime(returns.index) - returns.index = returns.index.tz_localize('UTC') - rets = returns['return'] - # - # Positions - pss = self.rets['positions'] - ps = [[k] + v[-2:] for k, v in iteritems(pss)] - cols = ps.pop(0) # headers are in the first entry - positions = DF.from_records(ps, index=cols[0], columns=cols) - positions.index = pandas.to_datetime(positions.index) - positions.index = positions.index.tz_localize('UTC') - - # - # Transactions - txss = self.rets['transactions'] - txs = list() - # The transactions have a common key (date) and can potentially happend - # for several assets. The dictionary has a single key and a list of - # lists. 
Each sublist contains the fields of a transaction - # Hence the double loop to undo the list indirection - for k, v in iteritems(txss): - for v2 in v: - txs.append([k] + v2) - - cols = txs.pop(0) # headers are in the first entry - transactions = DF.from_records(txs, index=cols[0], columns=cols) - transactions.index = pandas.to_datetime(transactions.index) - transactions.index = transactions.index.tz_localize('UTC') - - # Gross Leverage - cols = ['index', 'gross_lev'] - gross_lev = DF.from_records(iteritems(self.rets['gross_lev']), - index=cols[0], columns=cols) - - gross_lev.index = pandas.to_datetime(gross_lev.index) - gross_lev.index = gross_lev.index.tz_localize('UTC') - glev = gross_lev['gross_lev'] - - # Return all together - return rets, positions, transactions, glev diff --git a/spaces/Lianjd/stock_dashboard/backtrader/utils/autodict.py b/spaces/Lianjd/stock_dashboard/backtrader/utils/autodict.py deleted file mode 100644 index 812e3ccb6db82d08d759f07b9fce3bfdd528c77b..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/utils/autodict.py +++ /dev/null @@ -1,145 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
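The `get_pf_items` method shown above repeatedly applies the same pandas pattern: build a DataFrame from (datetime, value) records, then make the index a UTC-aware `DatetimeIndex`. A standalone illustration of that conversion with made-up values:

```python
import datetime

import pandas as pd

# A returns dict of the kind produced by the TimeReturn analyzer (made-up values).
rets = {
    datetime.datetime(2020, 1, 2): 0.010,
    datetime.datetime(2020, 1, 3): -0.004,
}

returns = pd.DataFrame.from_records(
    list(rets.items()), index="index", columns=["index", "return"]
)
returns.index = pd.to_datetime(returns.index)
returns.index = returns.index.tz_localize("UTC")

returns_series = returns["return"]  # the Series shape pyfolio expects
print(returns_series)
```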
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from collections import OrderedDict, defaultdict - -from .py3 import values as py3lvalues - - -def Tree(): - return defaultdict(Tree) - - -class AutoDictList(dict): - def __missing__(self, key): - value = self[key] = list() - return value - - -class DotDict(dict): - # If the attribut is not found in the usual places try the dict itself - def __getattr__(self, key): - if key.startswith('__'): - return super(DotDict, self).__getattr__(key) - return self[key] - - -class AutoDict(dict): - _closed = False - - def _close(self): - self._closed = True - for key, val in self.items(): - if isinstance(val, (AutoDict, AutoOrderedDict)): - val._close() - - def _open(self): - self._closed = False - - def __missing__(self, key): - if self._closed: - raise KeyError - - value = self[key] = AutoDict() - return value - - def __getattr__(self, key): - if False and key.startswith('_'): - raise AttributeError - - return self[key] - - def __setattr__(self, key, value): - if False and key.startswith('_'): - self.__dict__[key] = value - return - - self[key] = value - - -class AutoOrderedDict(OrderedDict): - _closed = False - - def _close(self): - self._closed = True - for key, val in self.items(): - if isinstance(val, (AutoDict, AutoOrderedDict)): - val._close() - - def _open(self): - self._closed = False - - def __missing__(self, key): - if self._closed: - raise KeyError - - # value = self[key] = type(self)() - value = self[key] = AutoOrderedDict() - return value - - def __getattr__(self, key): - if key.startswith('_'): - raise AttributeError - - return self[key] - - def __setattr__(self, key, value): - if key.startswith('_'): - self.__dict__[key] = value - return - - self[key] = value - - # Define math operations - def __iadd__(self, other): - if type(self) != type(other): - return type(other)() + other - - return self + other - - def __isub__(self, other): - if type(self) != type(other): - return type(other)() - other - - return self - other - - def __imul__(self, other): - if type(self) != type(other): - return type(other)() * other - - return self + other - - def __idiv__(self, other): - if type(self) != type(other): - return type(other)() // other - - return self + other - - def __itruediv__(self, other): - if type(self) != type(other): - return type(other)() / other - - return self + other - - def lvalues(self): - return py3lvalues(self) diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_vision_only_academic.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_vision_only_academic.py deleted file mode 100644 index 318144d2418c7e77568d4915d72f01882835ba94..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_vision_only_academic.py +++ /dev/null @@ -1,81 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_20e.py', - '../../_base_/recog_pipelines/abinet_pipeline.py', - '../../_base_/recog_datasets/toy_data.py' - # '../../_base_/recog_datasets/ST_MJ_alphanumeric_train.py', - # '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -# Model -num_chars = 37 -max_seq_len = 26 -label_convertor = dict( - type='ABIConvertor', 
- dict_type='DICT36', - with_unknown=False, - with_padding=False, - lower=True, -) - -model = dict( - type='ABINet', - backbone=dict(type='ResNetABI'), - encoder=dict( - type='ABIVisionModel', - encoder=dict( - type='TransformerEncoder', - n_layers=3, - n_head=8, - d_model=512, - d_inner=2048, - dropout=0.1, - max_len=8 * 32, - ), - decoder=dict( - type='ABIVisionDecoder', - in_channels=512, - num_channels=64, - attn_height=8, - attn_width=32, - attn_mode='nearest', - use_result='feature', - num_chars=num_chars, - max_seq_len=max_seq_len, - init_cfg=dict(type='Xavier', layer='Conv2d')), - ), - loss=dict( - type='ABILoss', - enc_weight=1.0, - dec_weight=1.0, - fusion_weight=1.0, - num_classes=num_chars), - label_convertor=label_convertor, - max_seq_len=max_seq_len, - iter_size=1) - -data = dict( - samples_per_gpu=192, - workers_per_gpu=8, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/ML-unipi/TermsOfServiceSummarization/app.py b/spaces/ML-unipi/TermsOfServiceSummarization/app.py deleted file mode 100644 index 2e070685176ccfca5e3a6a56507cf86ab0db402c..0000000000000000000000000000000000000000 --- a/spaces/ML-unipi/TermsOfServiceSummarization/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -from typing import AnyStr -import nltk -import streamlit as st -from transformers import pipeline, AutoTokenizer -import re - - -def main() -> None: - # header - st.title(":bookmark_tabs: Terms Of Service Summarizer :bookmark_tabs:") - st.markdown("The app aims to extract the main information from Terms Of Conditions, which are often too long and " - "difficult to understand. ") - st.markdown("To test it just copy-paste a Terms Of Conditions in the textarea or select one of the examples that " - "we have prepared for you, then you will see the summary represented as the most important sentences.") - st.markdown("If you want more info in how we built our NLP algorithm check the documentation in the following " - "GitHub repo: :point_right: https://github.com/balditommaso/TermsOfServiceSummarization :point_left:") - st.markdown(":skull_and_crossbones: NOTE :skull_and_crossbones::") - st.markdown("the App is still under development and we do not give any guarantee on the quality of the summaries, " - "so we suggest a careful reading of the document.") - - @st.cache(allow_output_mutation=True, suppress_st_warning=True, show_spinner=False) - def create_pipeline(): - with st.spinner("Loading the model..."): - tos_pipeline = pipeline(task="summarization", - model="ML-unipi/bart-large-tos", - tokenizer="ML-unipi/bart-large-tos", - ) - return tos_pipeline - - def clean_summaries(text: str) -> list: - result = [] - lines = re.split(r'(? None: - st.subheader("Summary :male-detective:") - for sentence in summary_sentences: - st.markdown(f"
{sentence}
  12. ", unsafe_allow_html=True) - - def get_list_files() -> list: - names = [] - for file in os.listdir("./samples/"): - if file.endswith(".txt"): - names.append(file.replace(".txt", "")) - - return names - - def fetch_file_content(filename: str) -> AnyStr: - with open(f"./samples/{filename.lower()}.txt", "r", encoding="utf-8") as file: - text = file.read() - return text - - def join_sentences(sentences: list) -> str: - return " ".join([sentence for sentence in sentences]) - - def split_sentences_by_token_length(sentences: list, split_token_length: int) -> list: - accumulated_lists = [] - result_list = [] - cumulative_token_length = 0 - - for sentence in sentences: - token_list = tokenizer(sentence, max_length=1024, truncation=True) - token_length = len(token_list["input_ids"]) - if token_length > 10: - if token_length + cumulative_token_length > split_token_length and result_list: - accumulated_lists.append(join_sentences(result_list)) - result_list = [sentence] - cumulative_token_length = token_length - else: - result_list.append(sentence) - cumulative_token_length += token_length - if result_list: - accumulated_lists.append(join_sentences(result_list)) - return accumulated_lists - - nltk.download("punkt") - pipe = create_pipeline() - tokenizer = AutoTokenizer.from_pretrained("ML-unipi/bart-large-tos") - - if "target_text" not in st.session_state: - st.session_state.target_text = "" - if "sample_choice" not in st.session_state: - st.session_state.sample_choice = "" - - st.header("Input") - sample_choice = st.selectbox( - label="Select a sample:", - options=get_list_files() - ) - - st.session_state.target_text = fetch_file_content(sample_choice) - target_text_input = st.text_area( - value=st.session_state.target_text, - label="Paste your own Term Of Service:", - height=240 - ) - - summarize_button = st.button(label="Try it!") - - if summarize_button: - if target_text_input != "": - summary_sentences = [] - with st.spinner("Summarizing in progress..."): - sentences = split_sentences_by_token_length(nltk.sent_tokenize(target_text_input, language="english"), - split_token_length=1024 - ) - for sentence in sentences: - output = pipe(sentence) - summary = output[0]["summary_text"] - summary_sentences += clean_summaries(summary) - display_summary(summary_sentences) - - -if __name__ == "__main__": - main() diff --git a/spaces/MWilinski/bot/tests/bot/discord_client/__init__.py b/spaces/MWilinski/bot/tests/bot/discord_client/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Makiing/coolb-in-gtest/next.config.js b/spaces/Makiing/coolb-in-gtest/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - 
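In the Streamlit app above, `split_sentences_by_token_length` greedily packs sentences into chunks that fit the model's 1024-token input limit before each chunk is summarized. A self-contained sketch of the same greedy grouping, with a simple whitespace counter standing in for the Hugging Face tokenizer:

```python
def chunk_sentences(sentences, max_tokens=1024, count_tokens=lambda s: len(s.split())):
    """Greedily pack sentences into chunks whose token count stays under max_tokens."""
    chunks, current, current_len = [], [], 0
    for sentence in sentences:
        n = count_tokens(sentence)
        if current and current_len + n > max_tokens:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n
    if current:
        chunks.append(" ".join(current))
    return chunks


print(chunk_sentences(["First sentence here.", "Second sentence."], max_tokens=4))
```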
-module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/modules/losses.py b/spaces/MashiroSA/sovits-emu-voice-transform/modules/losses.py deleted file mode 100644 index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/modules/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import modules.commons as commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/builder.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/builder.py deleted file mode 100644 index f9234eed8f1f186d9d8dfda34562157ee39bdb3a..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
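The helpers in `modules/losses.py` above are least-squares GAN objectives applied to lists of discriminator outputs (real outputs pushed toward 1, fake outputs toward 0). A minimal demonstration on random tensors with hypothetical shapes, editorial and not part of any deleted file:

```python
import torch


def lsgan_d_loss(real_outs, fake_outs):
    # Real outputs are pushed toward 1, fake outputs toward 0.
    return sum(
        torch.mean((1.0 - dr) ** 2) + torch.mean(dg ** 2)
        for dr, dg in zip(real_outs, fake_outs)
    )


def lsgan_g_loss(fake_outs):
    # The generator tries to make fake outputs look real (close to 1).
    return sum(torch.mean((1.0 - dg) ** 2) for dg in fake_outs)


real = [torch.rand(2, 1), torch.rand(2, 1)]  # per-discriminator outputs
fake = [torch.rand(2, 1), torch.rand(2, 1)]
print(lsgan_d_loss(real, fake).item(), lsgan_g_loss(fake).item())
```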
-import copy -import inspect - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers(): - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/spaces/MetaWabbit/Auto-GPT/tests/__init__.py b/spaces/MetaWabbit/Auto-GPT/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MirageML/sjc/run_img_sampling.py b/spaces/MirageML/sjc/run_img_sampling.py deleted file mode 100644 index bded1a0a2eb1b5530c590ae55c8d10c54720253b..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/run_img_sampling.py +++ /dev/null @@ -1,235 +0,0 @@ -from pathlib import Path -import numpy as np -import torch - -from misc import torch_samps_to_imgs -from adapt import Karras, ScoreAdapter, power_schedule -from adapt_gddpm import GuidedDDPM -from adapt_ncsn import NCSN as _NCSN -# from adapt_vesde import VESDE # not included to prevent import conflicts -from adapt_sd import StableDiffusion - -from my.utils import tqdm, EventStorage, HeartBeat, EarlyLoopBreak -from my.config import BaseConf, dispatch -from my.utils.seed import seed_everything - - -class GDDPM(BaseConf): - """Guided DDPM from OpenAI""" - model: str = "m_lsun_256" - lsun_cat: str = "bedroom" - imgnet_cat: int = -1 - - def make(self): - args = self.dict() - model = GuidedDDPM(**args) - return model - - -class SD(BaseConf): - """Stable Diffusion""" - variant: str = "v1" - v2_highres: bool = False - prompt: str = "a photograph of an astronaut riding a horse" - scale: float = 3.0 # classifier free guidance scale - precision: str = 'autocast' - - def make(self): - args = self.dict() - model = StableDiffusion(**args) - return model - - -class SDE(BaseConf): - def make(self): - args = self.dict() - model = VESDE(**args) - return model - - -class NCSN(BaseConf): - def make(self): - args = self.dict() - model = _NCSN(**args) - return model - - -class KarrasGen(BaseConf): - family: str = "gddpm" - gddpm: GDDPM = GDDPM() - sd: SD = SD() - # sde: SDE = SDE() - ncsn: NCSN = NCSN() - - batch_size: int = 10 - num_images: int = 1250 - num_t: int = 40 - σ_max: float = 80.0 - heun: bool = True - langevin: bool = False - cls_scaling: float = 1.0 # classifier guidance scaling - - def run(self): - args = self.dict() - family = args.pop("family") - model = getattr(self, family).make() - self.karras_generate(model, **args) - - @staticmethod - def karras_generate( - model: ScoreAdapter, - batch_size, num_images, σ_max, num_t, langevin, heun, cls_scaling, - **kwargs - ): - del kwargs # removed 
extra args - num_batches = num_images // batch_size - - fuse = EarlyLoopBreak(5) - with tqdm(total=num_batches) as pbar, \ - HeartBeat(pbar) as hbeat, \ - EventStorage() as metric: - - all_imgs = [] - - for _ in range(num_batches): - if fuse.on_break(): - break - - pipeline = Karras.inference( - model, batch_size, num_t, - init_xs=None, heun=heun, σ_max=σ_max, - langevin=langevin, cls_scaling=cls_scaling - ) - - for imgs in tqdm(pipeline, total=num_t+1, disable=False): - # _std = imgs.std().item() - # print(_std) - hbeat.beat() - pass - - if isinstance(model, StableDiffusion): - imgs = model.decode(imgs) - - imgs = torch_samps_to_imgs(imgs, uncenter=model.samps_centered()) - all_imgs.append(imgs) - - pbar.update() - - all_imgs = np.concatenate(all_imgs, axis=0) - metric.put_artifact("imgs", ".npy", lambda fn: np.save(fn, all_imgs)) - metric.step() - hbeat.done() - - -class SMLDGen(BaseConf): - family: str = "ncsn" - gddpm: GDDPM = GDDPM() - # sde: SDE = SDE() - ncsn: NCSN = NCSN() - - batch_size: int = 16 - num_images: int = 16 - num_stages: int = 80 - num_steps: int = 15 - σ_max: float = 80.0 - ε: float = 1e-5 - - def run(self): - args = self.dict() - family = args.pop("family") - model = getattr(self, family).make() - self.smld_generate(model, **args) - - @staticmethod - def smld_generate( - model: ScoreAdapter, - batch_size, num_images, num_stages, num_steps, σ_max, ε, - **kwargs - ): - num_batches = num_images // batch_size - σs = power_schedule(σ_max, model.σ_min, num_stages) - σs = [model.snap_t_to_nearest_tick(σ)[0] for σ in σs] - - fuse = EarlyLoopBreak(5) - with tqdm(total=num_batches) as pbar, \ - HeartBeat(pbar) as hbeat, \ - EventStorage() as metric: - - all_imgs = [] - - for _ in range(num_batches): - if fuse.on_break(): - break - - init_xs = torch.rand(batch_size, *model.data_shape(), device=model.device) - if model.samps_centered(): - init_xs = init_xs * 2 - 1 # [0, 1] -> [-1, 1] - - pipeline = smld_inference( - model, σs, num_steps, ε, init_xs - ) - - for imgs in tqdm(pipeline, total=(num_stages * num_steps)+1, disable=False): - pbar.set_description(f"{imgs.max().item():.3f}") - metric.put_scalars( - max=imgs.max().item(), min=imgs.min().item(), std=imgs.std().item() - ) - metric.step() - hbeat.beat() - - pbar.update() - imgs = torch_samps_to_imgs(imgs, uncenter=model.samps_centered()) - all_imgs.append(imgs) - - all_imgs = np.concatenate(all_imgs, axis=0) - metric.put_artifact("imgs", ".npy", lambda fn: np.save(fn, all_imgs)) - metric.step() - hbeat.done() - - -def smld_inference(model, σs, num_steps, ε, init_xs): - from math import sqrt - # not doing conditioning or cls guidance; for gddpm only lsun works; fine. 
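`smld_inference` above is annealed Langevin dynamics: for each noise level it takes `num_steps` updates `x <- x + a * score + sqrt(2a) * z` with `a = eps * (sigma_i / sigma_L)**2`. A toy, self-contained version using an analytic score (data assumed to be standard normal, so the sigma-perturbed score is `-x / (1 + sigma**2)`); the schedule below is made up:

```python
from math import sqrt

import torch


def toy_score(x, sigma):
    # Score of N(0, I) data convolved with N(0, sigma^2 I) noise.
    return -x / (1.0 + sigma ** 2)


def annealed_langevin(sigmas, num_steps=15, eps=1e-5, shape=(4, 2)):
    x = torch.rand(shape) * 2 - 1
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(num_steps):
            z = torch.randn_like(x)
            x = x + alpha * toy_score(x, sigma) + sqrt(2 * alpha) * z
    return x


sigmas = [50.0, 10.0, 1.0, 0.1, 0.01]  # decreasing noise levels, largest first
print(annealed_langevin(sigmas).std())
```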
- - xs = init_xs - yield xs - - for i in range(len(σs)): - α_i = ε * ((σs[i] / σs[-1]) ** 2) - for _ in range(num_steps): - grad = model.score(xs, σs[i]) - z = torch.randn_like(xs) - xs = xs + α_i * grad + sqrt(2 * α_i) * z - yield xs - - -def load_np_imgs(fname): - fname = Path(fname) - data = np.load(fname) - if fname.suffix == ".npz": - imgs = data['arr_0'] - else: - imgs = data - return imgs - - -def visualize(max_n_imgs=16): - import torchvision.utils as vutils - from imageio import imwrite - from einops import rearrange - - all_imgs = load_np_imgs("imgs/step_0.npy") - - imgs = all_imgs[:max_n_imgs] - imgs = rearrange(imgs, "N H W C -> N C H W", C=3) - imgs = torch.from_numpy(imgs) - pane = vutils.make_grid(imgs, padding=2, nrow=4) - pane = rearrange(pane, "C H W -> H W C", C=3) - pane = pane.numpy() - imwrite("preview.jpg", pane) - - -if __name__ == "__main__": - seed_everything(0) - dispatch(KarrasGen) - visualize(16) diff --git a/spaces/Monosmarinos/Pix2Pix-Video/style.css b/spaces/Monosmarinos/Pix2Pix-Video/style.css deleted file mode 100644 index 3cf565d3e03852436a405cf632d1d22433bb4087..0000000000000000000000000000000000000000 --- a/spaces/Monosmarinos/Pix2Pix-Video/style.css +++ /dev/null @@ -1,101 +0,0 @@ -#col-container {max-width: 820px; margin-left: auto; margin-right: auto;} -#duplicate-container{ - display: flex; - justify-content: space-between; - align-items: center; - line-height: 1em; - flex-direction: row-reverse; - font-size:1em; -} -a, a:hover, a:visited { - text-decoration-line: underline; - font-weight: 600; - color: #1f2937 !important; -} - -.dark a, .dark a:hover, .dark a:visited { - color: #f3f4f6 !important; -} - -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} - -.footer>p { - font-size: .8rem!important; - display: inline-block; - padding: 0 10px; - transform: translateY(26px); - background: white; -} -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} - -div#may-like-container > p { - font-size: .8em; - margin-bottom: 4px; -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - max-width: 13rem; -} - -#share-btn-container:hover { - background-color: #060606; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor:pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - right:0; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} - -#share-btn-container.hidden { - display: none!important; -} \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/data_migrator.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/data_migrator.py deleted file mode 100644 index 9fb0f205b67a4d55bb1208feba4e4db65c0b78e8..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/data_migrator.py +++ /dev/null @@ -1,98 +0,0 @@ -# 
Copyright (c) OpenMMLab. All rights reserved. -import argparse -import json -from typing import List, Tuple - -from mmocr.datasets import RecogLMDBDataset -from mmocr.utils import StringStripper, dump_ocr_data, recog_anno_to_imginfo - - -def parse_legacy_data(in_path: str, - format: str) -> Tuple[List[str], List[str]]: - """Load legacy data and return a list of file paths and labels. - - Args: - in_path (str): Path to annotation file. - format (str): Annotation format. Choices are 'txt', 'json' and 'lmdb'. - For 'lmdb' format, the lmdb file should only contains labels. For - lmdb file with labels and images, the conversion is unnecessary. - Returns: - tuple(list[str], list[str]): File paths and labels. - """ - file_paths = [] - labels = [] - strip_cls = StringStripper() - if format == 'lmdb': - dataset = RecogLMDBDataset( - in_path, - parser_cfg=dict(type='LineJsonParser', keys=['filename', 'text'])) - for data_info in dataset: - file_path = data_info['img_path'] - label = data_info['instances'][0]['text'] - file_path = strip_cls(file_path) - label = strip_cls(label) - # MJ's file_path starts with './' - if file_path.startswith('./'): - file_path = file_path[2:] - - file_paths.append(file_path) - labels.append(label) - return file_paths, labels - else: - with open(in_path) as f: - if format == 'txt': - for line in f: - line = strip_cls(line) - file_path, label = line.split()[:2] - # MJ's file_path starts with './' - if file_path.startswith('./'): - file_path = file_path[2:] - - file_paths.append(file_path) - labels.append(label) - elif format == 'jsonl': - for line in f: - datum = json.loads(line) - file_path = datum['filename'] - # MJ's file_path starts with './' - if file_path.startswith('./'): - file_path = file_path[2:] - - file_paths.append(file_path) - labels.append(datum['text']) - - return file_paths, labels - - -def parse_args(): - """Parse input arguments.""" - parser = argparse.ArgumentParser( - description='Convert annotations for' - 'text recognition tasks in MMOCR 0.x into the latest openmmlab format.' - ) - parser.add_argument( - 'in_path', help='The path to legacy recognition data file') - parser.add_argument( - 'out_path', help='The output json path in openmmlab format') - parser.add_argument( - '--format', - choices=['txt', 'jsonl', 'lmdb'], - type=str, - default='txt', - help='Legacy data format') - args = parser.parse_args() - if args.out_path.split('.')[-1] != 'json': - raise ValueError('The output path must be a json file.') - return args - - -def main(): - args = parse_args() - file_paths, labels = parse_legacy_data(args.in_path, args.format) - img_infos = recog_anno_to_imginfo(file_paths, labels) - dump_ocr_data(img_infos, args.out_path, 'textrecog') - print('finish') - - -if __name__ == '__main__': - main() diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/datasets/imagenet_input.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/datasets/imagenet_input.py deleted file mode 100644 index 0b210b8ce11f3dbf1f14482b1b4f3a95da02a48a..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/datasets/imagenet_input.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright 2018 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
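The `jsonl` branch of `parse_legacy_data` above reads one JSON object per line with `filename` and `text` fields and strips MJSynth's leading `./`. A standalone sketch of just that branch, fed from an in-memory buffer with made-up records:

```python
import io
import json


def parse_jsonl(fp):
    file_paths, labels = [], []
    for line in fp:
        datum = json.loads(line)
        path = datum["filename"]
        if path.startswith("./"):  # MJSynth paths start with './'
            path = path[2:]
        file_paths.append(path)
        labels.append(datum["text"])
    return file_paths, labels


sample = io.StringIO('{"filename": "./img_1.jpg", "text": "hello"}\n'
                     '{"filename": "img_2.jpg", "text": "world"}\n')
print(parse_jsonl(sample))
```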
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Imagenet input.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -from absl import flags -import tensorflow as tf - -FLAGS = flags.FLAGS - - -flags.DEFINE_string('imagenet_data_dir', None, - 'Directory with Imagenet dataset in TFRecord format.') - - -def _decode_and_random_crop(image_buffer, bbox, image_size): - """Randomly crops image and then scales to target size.""" - with tf.name_scope('distorted_bounding_box_crop', - values=[image_buffer, bbox]): - sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box( - tf.image.extract_jpeg_shape(image_buffer), - bounding_boxes=bbox, - min_object_covered=0.1, - aspect_ratio_range=[0.75, 1.33], - area_range=[0.08, 1.0], - max_attempts=10, - use_image_if_no_bounding_boxes=True) - bbox_begin, bbox_size, _ = sample_distorted_bounding_box - - # Crop the image to the specified bounding box. - offset_y, offset_x, _ = tf.unstack(bbox_begin) - target_height, target_width, _ = tf.unstack(bbox_size) - crop_window = tf.stack([offset_y, offset_x, target_height, target_width]) - image = tf.image.decode_and_crop_jpeg(image_buffer, crop_window, channels=3) - image = tf.image.convert_image_dtype( - image, dtype=tf.float32) - - image = tf.image.resize_bicubic([image], - [image_size, image_size])[0] - - return image - - -def _decode_and_center_crop(image_buffer, image_size): - """Crops to center of image with padding then scales to target size.""" - shape = tf.image.extract_jpeg_shape(image_buffer) - image_height = shape[0] - image_width = shape[1] - - padded_center_crop_size = tf.cast( - 0.875 * tf.cast(tf.minimum(image_height, image_width), tf.float32), - tf.int32) - - offset_height = ((image_height - padded_center_crop_size) + 1) // 2 - offset_width = ((image_width - padded_center_crop_size) + 1) // 2 - crop_window = tf.stack([offset_height, offset_width, - padded_center_crop_size, padded_center_crop_size]) - image = tf.image.decode_and_crop_jpeg(image_buffer, crop_window, channels=3) - image = tf.image.convert_image_dtype( - image, dtype=tf.float32) - - image = tf.image.resize_bicubic([image], - [image_size, image_size])[0] - - return image - - -def _normalize(image): - """Rescale image to [-1, 1] range.""" - return tf.multiply(tf.subtract(image, 0.5), 2.0) - - -def image_preprocessing(image_buffer, bbox, image_size, is_training): - """Does image decoding and preprocessing. - - Args: - image_buffer: string tensor with encoded image. - bbox: bounding box of the object at the image. - image_size: image size. - is_training: whether to do training or eval preprocessing. - - Returns: - Tensor with the image. 
- """ - if is_training: - image = _decode_and_random_crop(image_buffer, bbox, image_size) - image = _normalize(image) - image = tf.image.random_flip_left_right(image) - else: - image = _decode_and_center_crop(image_buffer, image_size) - image = _normalize(image) - image = tf.reshape(image, [image_size, image_size, 3]) - return image - - -def imagenet_parser(value, image_size, is_training): - """Parse an ImageNet record from a serialized string Tensor. - - Args: - value: encoded example. - image_size: size of the output image. - is_training: if True then do training preprocessing, - otherwise do eval preprocessing. - - Returns: - image: tensor with the image. - label: true label of the image. - """ - keys_to_features = { - 'image/encoded': - tf.FixedLenFeature((), tf.string, ''), - 'image/format': - tf.FixedLenFeature((), tf.string, 'jpeg'), - 'image/class/label': - tf.FixedLenFeature([], tf.int64, -1), - 'image/class/text': - tf.FixedLenFeature([], tf.string, ''), - 'image/object/bbox/xmin': - tf.VarLenFeature(dtype=tf.float32), - 'image/object/bbox/ymin': - tf.VarLenFeature(dtype=tf.float32), - 'image/object/bbox/xmax': - tf.VarLenFeature(dtype=tf.float32), - 'image/object/bbox/ymax': - tf.VarLenFeature(dtype=tf.float32), - 'image/object/class/label': - tf.VarLenFeature(dtype=tf.int64), - } - - parsed = tf.parse_single_example(value, keys_to_features) - - image_buffer = tf.reshape(parsed['image/encoded'], shape=[]) - - xmin = tf.expand_dims(parsed['image/object/bbox/xmin'].values, 0) - ymin = tf.expand_dims(parsed['image/object/bbox/ymin'].values, 0) - xmax = tf.expand_dims(parsed['image/object/bbox/xmax'].values, 0) - ymax = tf.expand_dims(parsed['image/object/bbox/ymax'].values, 0) - # Note that ordering is (y, x) - bbox = tf.concat([ymin, xmin, ymax, xmax], 0) - # Force the variable number of bounding boxes into the shape - # [1, num_boxes, coords]. - bbox = tf.expand_dims(bbox, 0) - bbox = tf.transpose(bbox, [0, 2, 1]) - - image = image_preprocessing( - image_buffer=image_buffer, - bbox=bbox, - image_size=image_size, - is_training=is_training - ) - - # Labels are in [1, 1000] range - label = tf.cast( - tf.reshape(parsed['image/class/label'], shape=[]), dtype=tf.int32) - - return image, label - - -def imagenet_input(split, batch_size, image_size, is_training): - """Returns ImageNet dataset. - - Args: - split: name of the split, "train" or "validation". - batch_size: size of the minibatch. - image_size: size of the one side of the image. Output images will be - resized to square shape image_size*image_size. - is_training: if True then training preprocessing is done, otherwise eval - preprocessing is done. - - Raises: - ValueError: if name of the split is incorrect. - - Returns: - Instance of tf.data.Dataset with the dataset. 
- """ - if split.lower().startswith('train'): - file_pattern = os.path.join(FLAGS.imagenet_data_dir, 'train-*') - elif split.lower().startswith('validation'): - file_pattern = os.path.join(FLAGS.imagenet_data_dir, 'validation-*') - else: - raise ValueError('Invalid split: %s' % split) - - dataset = tf.data.Dataset.list_files(file_pattern, shuffle=is_training) - - if is_training: - dataset = dataset.repeat() - - def fetch_dataset(filename): - return tf.data.TFRecordDataset(filename, buffer_size=8*1024*1024) - - # Read the data from disk in parallel - dataset = dataset.apply( - tf.data.experimental.parallel_interleave( - fetch_dataset, cycle_length=4, sloppy=True)) - dataset = dataset.shuffle(1024) - - # Parse, preprocess, and batch the data in parallel - dataset = dataset.apply( - tf.data.experimental.map_and_batch( - lambda value: imagenet_parser(value, image_size, is_training), - batch_size=batch_size, - num_parallel_batches=4, - drop_remainder=True)) - - def set_shapes(images, labels): - """Statically set the batch_size dimension.""" - images.set_shape(images.get_shape().merge_with( - tf.TensorShape([batch_size, None, None, None]))) - labels.set_shape(labels.get_shape().merge_with( - tf.TensorShape([batch_size]))) - return images, labels - - # Assign static batch size dimension - dataset = dataset.map(set_shapes) - - # Prefetch overlaps in-feed with training - dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE) - return dataset - - -def num_examples_per_epoch(split): - """Returns the number of examples in the data set. - - Args: - split: name of the split, "train" or "validation". - - Raises: - ValueError: if split name is incorrect. - - Returns: - Number of example in the split. - """ - if split.lower().startswith('train'): - return 1281167 - elif split.lower().startswith('validation'): - return 50000 - else: - raise ValueError('Invalid split: %s' % split) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py deleted file mode 100644 index e7465bc889fd1ba6ca2c60905a2eb6ff5cc62b9d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py +++ /dev/null @@ -1,488 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
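`imagenet_input` above follows the classic TFRecord recipe: list shards, interleave reads, shuffle, parse and batch in parallel, then prefetch. A simplified sketch of the same pipeline shape using current TF 2.x `tf.data` calls; the file pattern and parse function are placeholders supplied by the caller:

```python
import tensorflow as tf


def make_dataset(file_pattern, parse_fn, batch_size, is_training=True):
    ds = tf.data.Dataset.list_files(file_pattern, shuffle=is_training)
    if is_training:
        ds = ds.repeat()
    # Read several TFRecord shards in parallel.
    ds = ds.interleave(
        lambda f: tf.data.TFRecordDataset(f, buffer_size=8 * 1024 * 1024),
        cycle_length=4,
        num_parallel_calls=tf.data.AUTOTUNE,
        deterministic=not is_training,
    )
    ds = ds.shuffle(1024)
    ds = ds.map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.batch(batch_size, drop_remainder=True)
    return ds.prefetch(tf.data.AUTOTUNE)
```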
- -from typing import Tuple, List - -import torch -import torch.nn.functional as F -from fairseq.models import FairseqEncoder -from fairseq.models.speech_to_text import ( - ConvTransformerEncoder, -) -from fairseq.models.speech_to_text.utils import attention_suppression -from fairseq.models.speech_to_text.utils import ( - lengths_to_encoder_padding_mask, - segments_to_sequence, - sequence_to_segments, -) -from fairseq.modules import MultiheadAttention, TransformerEncoderLayer -from torch import nn, Tensor - -# ------------------------------------------------------------------------------ -# AugmentedMemoryConvTransformerEncoder -# ------------------------------------------------------------------------------ - - -class AugmentedMemoryConvTransformerEncoder(ConvTransformerEncoder): - def __init__(self, args): - super().__init__(args) - - args.encoder_stride = self.stride() - - self.left_context = args.left_context // args.encoder_stride - - self.right_context = args.right_context // args.encoder_stride - - self.left_context_after_stride = args.left_context // args.encoder_stride - self.right_context_after_stride = args.right_context // args.encoder_stride - - self.transformer_layers = nn.ModuleList([]) - self.transformer_layers.extend( - [ - AugmentedMemoryTransformerEncoderLayer(args) - for i in range(args.encoder_layers) - ] - ) - - def stride(self): - # Hard coded here. Should infer from convs in future - stride = 4 - return stride - - def forward(self, src_tokens, src_lengths, states=None): - """Encode input sequence. - :param torch.Tensor xs: input tensor - :param torch.Tensor masks: input mask - :return: position embedded tensor and mask - :rtype Tuple[torch.Tensor, torch.Tensor]: - """ - bsz, max_seq_len, _ = src_tokens.size() - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - x = self.conv(x) - bsz, _, output_seq_len, _ = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - x = self.out(x) - x = self.embed_scale * x - - subsampling_factor = 1.0 * max_seq_len / output_seq_len - input_lengths = torch.max( - (src_lengths.float() / subsampling_factor).ceil().long(), - x.size(0) * src_lengths.new_ones([src_lengths.size(0)]).long(), - ) - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - input_lengths, batch_first=True - ) - - # TODO: fix positional embedding - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # State to store memory banks etc. 
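The streaming encoder above consumes an utterance segment by segment, each segment extended with `left_context` and `right_context` frames (later divided by the encoder stride). A rough sketch of that segmentation on a `[T, B, D]` feature tensor; the real `sequence_to_segments` utility also tracks lengths and padding, so this is only an approximation:

```python
import torch


def split_with_context(features, segment_size, left_context, right_context):
    """Yield overlapping chunks: each segment plus its left/right context frames."""
    T = features.size(0)  # features: [T, B, D]
    for start in range(0, T, segment_size):
        end = min(start + segment_size, T)
        ctx_start = max(0, start - left_context)
        ctx_end = min(T, end + right_context)
        yield features[ctx_start:ctx_end]


feats = torch.randn(25, 2, 80)  # 25 frames, batch 2, 80-dim filterbanks
chunks = list(split_with_context(feats, segment_size=10, left_context=3, right_context=2))
print([c.shape[0] for c in chunks])  # [12, 15, 8]
```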
- if states is None: - states = [ - {"memory_banks": None, "encoder_states": None} - for i in range(len(self.transformer_layers)) - ] - - for i, layer in enumerate(self.transformer_layers): - # x size: - # (self.left_size + self.segment_size + self.right_size) - # / self.stride, num_heads, dim - # TODO: Consider mask here - x = layer(x, states[i]) - states[i]["encoder_states"] = x[ - self.left_context_after_stride : -self.right_context_after_stride - ] - - lengths = ( - ( - ~encoder_padding_mask[ - :, self.left_context_after_stride : -self.right_context_after_stride - ] - ) - .sum(dim=1, keepdim=True) - .long() - ) - - return states[-1]["encoder_states"], lengths, states - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryTransformerEncoderLayer -# ------------------------------------------------------------------------------ -class AugmentedMemoryTransformerEncoderLayer(TransformerEncoderLayer): - def __init__(self, args): - super().__init__(args) - - self.left_context = args.left_context // args.encoder_stride - self.right_context = args.right_context // args.encoder_stride - - def forward(self, x, state): - - length, batch_size, x_dim = x.size() - - residual = x - - if self.normalize_before: - x = self.self_attn_layer_norm(x) - - # init_state - if state.get("memory_banks", None) is None: - state["memory_banks"] = [] - - # TODO reseach new sum_query method - seg_start = self.left_context - seg_end = length - self.right_context - if seg_start < seg_end: - summarization_query = torch.mean(x[seg_start:seg_end], keepdim=True, dim=0) - else: - summarization_query = x.new_zeros(1, batch_size, x_dim) - - x = torch.cat([x, summarization_query], dim=0) - - x = self.self_attn(input_and_summary=x, state=state) - - x = self.dropout_module(x) - x = residual + x - - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - if not self.normalize_before: - x = self.final_layer_norm(x) - - return x - - def build_self_attention(self, embed_dim, args): - return AugmentedMemoryMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - tanh_on_mem=True, - max_memory_size=args.max_memory_size, - ) - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryMultiheadAttention -# ------------------------------------------------------------------------------ -class AugmentedMemoryMultiheadAttention(MultiheadAttention): - """ - Augmented Memory Attention from - Streaming Transformer-based Acoustic Models - Using Self-attention with Augmented Memory - https://arxiv.org/abs/2005.08042 - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - tanh_on_mem=False, - memory_dim=None, - std_scale=0.5, # 0.5 based on https://arxiv.org/abs/2005.09137 - max_memory_size=-1, - disable_mem_on_mem_attn=True, - ): - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - 
encoder_decoder_attention, - q_noise, - qn_block_size, - ) - - self.memory_dim = memory_dim if memory_dim is not None else embed_dim - self.std_scale = std_scale - self.disable_mem_on_mem_attn = disable_mem_on_mem_attn - - # This Operator was used for factorization in PySpeech - self.v2e = lambda x: x - - if tanh_on_mem: - self.squash_mem = torch.tanh - self.nonlinear_squash_mem = True - else: - self.squash_mem = lambda x: x - self.nonlinear_squash_mem = False - - self.max_memory_size = max_memory_size - - def forward(self, input_and_summary, state): - """ - input: Encoder states of current segment with left or right context, - plus one summarization query - - """ - - length, batch_size, _ = input_and_summary.shape - length = length - 1 # not include sum_query, last index - - memory = state["memory_banks"] - # TODO: positional embedding on memory - - if self.max_memory_size > -1 and len(memory) > self.max_memory_size: - # TODO: need to fix here - if self.max_memory_size == 0: - memory = memory.new_zeros(1, memory.size(1), self.memory_dim) - else: - memory = memory[-self.max_memory_size :] - - memory_and_input = torch.cat(memory + [input_and_summary[:-1]], dim=0) - input_and_sum_query = input_and_summary - - q = self.q_proj(self.v2e(input_and_sum_query)) - k = self.k_proj(self.v2e(memory_and_input)) - v = self.v_proj(self.v2e(memory_and_input)) - - q = ( - q.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - * self.scaling - ) - k = ( - k.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - v = ( - v.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - attention_weights = torch.bmm(q, k.transpose(1, 2)) - - if self.disable_mem_on_mem_attn: - attention_weights = self.suppress_mem_on_mem_attention( - batch_size, self.num_heads, len(memory), attention_weights - ) - - if self.std_scale is not None: - attention_weights = attention_suppression(attention_weights, self.std_scale) - - assert list(attention_weights.shape) == [ - batch_size * self.num_heads, - length + 1, - length + len(memory), - ] - - attention_weights = torch.nn.functional.softmax( - attention_weights.float(), dim=-1 - ).type_as(attention_weights) - - attention_probs = self.dropout_module(attention_weights) - - # [T, T, B, n_head] + [T, B, n_head, d_head] -> [T, B, n_head, d_head] - attention = torch.bmm(attention_probs, v) - - assert list(attention.shape) == [ - batch_size * self.num_heads, - length + 1, - self.head_dim, - ] - - attention = ( - attention.transpose(0, 1) - .contiguous() - .view(length + 1, batch_size, self.embed_dim) - ) - - output_and_memory = self.out_proj(attention) - - next_m = output_and_memory[-1:] - next_m = self.squash_mem(next_m) - output = output_and_memory[:-1] - - state["memory_banks"].append(next_m) - - return output - - def suppress_mem_on_mem_attention( - self, B: int, num_heads: int, mem_size: int, attention_weight: Tensor - ): - """ - Arguments: - - B: batch size - - num_heads: number of attention heads - - mem_size: size of memory bank - - attention_weight: a [B*num_heads, T + 1, T + mem_size] vector - - Return: - modified attention_weight with [B*num_heads, -1, :mem_size] = -inf - """ - attention_weight[:, -1, :mem_size] = float("-inf") - return attention_weight - - -# ------------------------------------------------------------------------------ -# SequenceEncoder -# ------------------------------------------------------------------------------ -class 
SequenceEncoder(FairseqEncoder): - """ - SequenceEncoder encodes sequences. - - More specifically, `src_tokens` and `src_lengths` in `forward()` should - describe a batch of "complete" sequences rather than segments. - - Segment-by-segment inference can be triggered by `segment_size`: - 1) `segment_size` is None: - SequenceEncoder treats the input sequence as one single segment. - 2) `segment_size` is not None (some int instead): - SequenceEncoder does the following: - 1. breaks the input sequence into several segments - 2. inference on each segment and collect the outputs - 3. concatanete segment outputs into the output sequence. - Note that `segment_size` here shouldn't include additional left/right - contexts needed, for example if we wish to infer with LC-BLSTM where the - middle chunk size is 100 and right context is 20, `segment_size` should be - 100. - """ - - def __init__(self, args, module): - super().__init__(None) - - self.module = module - self.input_time_axis = 1 - self.output_time_axis = 0 - self.segment_size = args.segment_size - self.left_context = args.left_context - self.right_context = args.right_context - - def forward( - self, - src_tokens: Tensor, - src_lengths: Tensor, - states=None, - ): - - seg_src_tokens_lengths = sequence_to_segments( - sequence=src_tokens, - time_axis=self.input_time_axis, - lengths=src_lengths, - segment_size=self.segment_size, - extra_left_context=self.left_context, - extra_right_context=self.right_context, - ) - - seg_encoder_states_lengths: List[Tuple[Tensor, Tensor]] = [] - - for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths: - (seg_encoder_states, seg_enc_lengths, states) = self.module( - seg_src_tokens, - seg_src_lengths, - states=states, - ) - - seg_encoder_states_lengths.append((seg_encoder_states, seg_enc_lengths)) - - encoder_out, enc_lengths = segments_to_sequence( - segments=seg_encoder_states_lengths, time_axis=self.output_time_axis - ) - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - enc_lengths, batch_first=True - ) - - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - return { - "encoder_out": [encoder_out], - "encoder_padding_mask": [encoder_padding_mask], - "encoder_embedding": [], - "encoder_states": [states], - "src_tokens": [], - "src_lengths": [], - } - - def incremental_encode( - self, - seg_src_tokens: Tensor, - seg_src_lengths: Tensor, - states=None, - ): - """ - Different from forward function, this function takes segmented speech - as input, and append encoder states to previous states - """ - (seg_encoder_states, seg_enc_lengths, states) = self.module( - seg_src_tokens, - seg_src_lengths, - states=states, - ) - return seg_encoder_states, seg_enc_lengths, states - - -# ------------------------------------------------------------------------------ -# Augmented memory model decorator -# ------------------------------------------------------------------------------ -def augmented_memory(klass): - class StreamSeq2SeqModel(klass): - @staticmethod - def add_args(parser): - super(StreamSeq2SeqModel, StreamSeq2SeqModel).add_args(parser) - parser.add_argument( - "--segment-size", type=int, required=True, help="Length of the segment." 
- ) - parser.add_argument( - "--left-context", - type=int, - default=0, - help="Left context for the segment.", - ) - parser.add_argument( - "--right-context", - type=int, - default=0, - help="Right context for the segment.", - ) - parser.add_argument( - "--max-memory-size", - type=int, - default=-1, - help="Right context for the segment.", - ) - - StreamSeq2SeqModel.__name__ = klass.__name__ - return StreamSeq2SeqModel diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_to_text/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_to_text/README.md deleted file mode 100644 index f639d300d342f8de1392c98bfc44ec8690188539..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_to_text/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# Speech-to-Text (S2T) Modeling - -[https://www.aclweb.org/anthology/2020.aacl-demo.6](https://www.aclweb.org/anthology/2020.aacl-demo.6.pdf) - -Speech recognition (ASR) and speech-to-text translation (ST) with fairseq. - -## Data Preparation -S2T modeling data consists of source speech features, target text and other optional information -(source text, speaker id, etc.). Fairseq S2T uses per-dataset-split TSV manifest files -to store these information. Each data field is represented by a column in the TSV file. - -Unlike text token embeddings, speech features (e.g. log mel-scale filter banks) are usually fixed -during model training and can be pre-computed. The manifest file contains the path to -either the feature file in NumPy format or the WAV/FLAC audio file. For the latter, -features will be extracted on-the-fly by fairseq S2T. Optionally, feature/audio files can be packed -into uncompressed ZIP files (then accessed via byte offset and length) to improve I/O performance. - -Fairseq S2T also employs a YAML file for data related configurations: tokenizer type and dictionary path -for the target text, feature transforms such as CMVN (cepstral mean and variance normalization) and SpecAugment, -temperature-based resampling, etc. - -## Model Training -Fairseq S2T uses the unified `fairseq-train` interface for model training. It requires arguments `--task speech_to_text`, - `--arch ` and `--config-yaml `. - -## Inference & Evaluation -Fairseq S2T uses the unified `fairseq-generate`/`fairseq-interactive` interface for inference and evaluation. It -requires arguments `--task speech_to_text` and `--config-yaml `. The interactive console takes -audio paths (one per line) as inputs. - - -## Examples -- [Speech Recognition (ASR) on LibriSpeech](docs/librispeech_example.md) - -- [Speech-to-Text Translation (ST) on MuST-C](docs/mustc_example.md) - -- [Speech-to-Text Translation (ST) on CoVoST 2](docs/covost_example.md) - -- [Speech-to-Text Translation (ST) on Multilingual TEDx](docs/mtedx_example.md) -- [Simultaneous Speech-to-Text Translation (SimulST) on MuST-C](docs/simulst_mustc_example.md) - -## Updates -- 02/04/2021: Added interactive decoding (`fairseq-interactive`) support. Examples: - [ASR (LibriSpeech)](docs/librispeech_example.md#interactive-decoding) - and [ST (CoVoST 2)](docs/covost_example.md#interactive-decoding). -- 01/08/2021: Several fixes for S2T Transformer model, inference-time de-tokenization, scorer configuration and data - preparation scripts. We also add pre-trained models to the examples and revise the instructions. - Breaking changes: the data preparation scripts now extract filterbank features without CMVN. 
CMVN is instead applied - on-the-fly (defined in the config YAML). - -## What's Next -- We are migrating the old fairseq [ASR example](../speech_recognition) into this S2T framework and - merging the features from both sides. -- The following papers also base their experiments on fairseq S2T. We are adding more examples for replication. - - [Improving Cross-Lingual Transfer Learning for End-to-End Speech Recognition with Speech Translation (Wang et al., 2020)](https://arxiv.org/abs/2006.05474) - - [Self-Supervised Representations Improve End-to-End Speech Translation (Wu et al., 2020)](https://arxiv.org/abs/2006.12124) - - [Self-Training for End-to-End Speech Translation (Pino et al., 2020)](https://arxiv.org/abs/2006.02490) - - [CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus (Wang et al., 2020)](https://arxiv.org/abs/2002.01320) - - [Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade (Pino et al., 2019)](https://arxiv.org/abs/1909.06515) - -## Citation -Please cite as: -``` -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation/prepare-wmt14en2fr.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation/prepare-wmt14en2fr.sh deleted file mode 100644 index 2ac97a5b76fab255449493488ed8bd67350a7bac..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation/prepare-wmt14en2fr.sh +++ /dev/null @@ -1,136 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=40000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://statmt.org/wmt13/training-parallel-un.tgz" - "http://statmt.org/wmt14/training-parallel-nc-v9.tgz" - "http://statmt.org/wmt10/training-giga-fren.tar" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-un.tgz" - "training-parallel-nc-v9.tgz" - "training-giga-fren.tar" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.fr-en" - "commoncrawl.fr-en" - "un/undoc.2000.fr-en" - "training/news-commentary-v9.fr-en" - "giga-fren.release2.fixed" -) - -if [ ! 
-d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=en -tgt=fr -lang=en-fr -prep=wmt14_en_fr -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done - -gunzip giga-fren.release2.fixed.*.gz -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." -for l in $src $tgt; do - awk '{if (NR%1333 == 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l - awk '{if (NR%1333 != 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l -done - -TRAIN=$tmp/train.fr-en -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." - python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/save_encoder.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/save_encoder.py deleted file mode 100644 index 24a842e4092663c79c92a299fa85747b7c0bed64..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/save_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Translate pre-processed data with a trained model. 
-""" - -import numpy as np -import torch -from fairseq import checkpoint_utils, options, progress_bar, tasks, utils -from fairseq.sequence_generator import EnsembleModel -from fairseq.utils import safe_hasattr - - -def get_avg_pool( - models, sample, prefix_tokens, src_dict, remove_bpe, has_langtok=False -): - model = EnsembleModel(models) - - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - - # compute the encoder output for each beam - encoder_outs = model.forward_encoder(encoder_input) - np_encoder_outs = encoder_outs[0].encoder_out.cpu().numpy().astype(np.float32) - encoder_mask = 1 - encoder_outs[0].encoder_padding_mask.cpu().numpy().astype( - np.float32 - ) - encoder_mask = np.expand_dims(encoder_mask.T, axis=2) - if has_langtok: - encoder_mask = encoder_mask[1:, :, :] - np_encoder_outs = np_encoder_outs[1, :, :] - masked_encoder_outs = encoder_mask * np_encoder_outs - avg_pool = (masked_encoder_outs / encoder_mask.sum(axis=0)).sum(axis=0) - return avg_pool - - -def main(args): - assert args.path is not None, "--path required for generation!" - assert ( - not args.sampling or args.nbest == args.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - args.replace_unk is None or args.raw_text - ), "--replace-unk requires a raw text dataset (--raw-text)" - - args.beam = 1 - utils.import_user_module(args) - - if args.max_tokens is None: - args.max_tokens = 12000 - print(args) - use_cuda = torch.cuda.is_available() and not args.cpu - - # Load dataset splits - task = tasks.setup_task(args) - task.load_dataset(args.gen_subset) - - # Set dictionaries - try: - src_dict = getattr(task, "source_dictionary", None) - except NotImplementedError: - src_dict = None - tgt_dict = task.target_dictionary - - # Load ensemble - print("| loading model(s) from {}".format(args.path)) - models, _model_args = checkpoint_utils.load_model_ensemble( - args.path.split(":"), - arg_overrides=eval(args.model_overrides), - task=task, - ) - - # Optimize ensemble for generation - for model in models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(args.replace_unk) - - # Load dataset (possibly sharded) - itr = task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_positions=utils.resolve_max_positions( - task.max_positions(), - ), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - ).next_epoch_itr(shuffle=False) - - num_sentences = 0 - source_sentences = [] - shard_id = 0 - all_avg_pool = None - encoder_has_langtok = ( - safe_hasattr(task.args, "encoder_langtok") - and task.args.encoder_langtok is not None - and safe_hasattr(task.args, "lang_tok_replacing_bos_eos") - and not task.args.lang_tok_replacing_bos_eos - ) - with progress_bar.build_progress_bar(args, itr) as t: - for sample in t: - if sample is None: - print("Skipping None") - continue - sample = 
utils.move_to_cuda(sample) if use_cuda else sample - if "net_input" not in sample: - continue - - prefix_tokens = None - if args.prefix_size > 0: - prefix_tokens = sample["target"][:, : args.prefix_size] - - with torch.no_grad(): - avg_pool = get_avg_pool( - models, - sample, - prefix_tokens, - src_dict, - args.post_process, - has_langtok=encoder_has_langtok, - ) - if all_avg_pool is not None: - all_avg_pool = np.concatenate((all_avg_pool, avg_pool)) - else: - all_avg_pool = avg_pool - - if not isinstance(sample["id"], list): - sample_ids = sample["id"].tolist() - else: - sample_ids = sample["id"] - for i, sample_id in enumerate(sample_ids): - # Remove padding - src_tokens = utils.strip_pad( - sample["net_input"]["src_tokens"][i, :], tgt_dict.pad() - ) - - # Either retrieve the original sentences or regenerate them from tokens. - if align_dict is not None: - src_str = task.dataset(args.gen_subset).src.get_original_text( - sample_id - ) - else: - if src_dict is not None: - src_str = src_dict.string(src_tokens, args.post_process) - else: - src_str = "" - - if not args.quiet: - if src_dict is not None: - print("S-{}\t{}".format(sample_id, src_str)) - - source_sentences.append(f"{sample_id}\t{src_str}") - - num_sentences += sample["nsentences"] - if all_avg_pool.shape[0] >= 1000000: - with open( - f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}", - "w", - ) as avg_pool_file: - all_avg_pool.tofile(avg_pool_file) - with open( - f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}", - "w", - ) as sentence_file: - sentence_file.writelines(f"{line}\n" for line in source_sentences) - all_avg_pool = None - source_sentences = [] - shard_id += 1 - - if all_avg_pool is not None: - with open( - f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}", "w" - ) as avg_pool_file: - all_avg_pool.tofile(avg_pool_file) - with open( - f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}", "w" - ) as sentence_file: - sentence_file.writelines(f"{line}\n" for line in source_sentences) - return None - - -def cli_main(): - parser = options.get_generation_parser() - parser.add_argument( - "--encoder-save-dir", - default="", - type=str, - metavar="N", - help="directory to save encoder outputs", - ) - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/__init__.py deleted file mode 100644 index 0278f6a27340c7ff7e207d09348483d1b0d3a100..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import criterions, models, tasks # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/transformer/transformer_encoder.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/transformer/transformer_encoder.py deleted file mode 100644 index f007776a6f3b7e6731edc01d95aa24eed255d0e8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/transformer/transformer_encoder.py +++ /dev/null @@ -1,341 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoder -from fairseq.modules import ( - FairseqDropout, - LayerDropModuleList, - LayerNorm, - PositionalEmbedding, - SinusoidalPositionalEmbedding, -) -from fairseq.modules import transformer_layer -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -# rewrite name for backward compatibility in `make_generation_fast_` -def module_name_fordropout(module_name: str) -> str: - if module_name == 'TransformerEncoderBase': - return 'TransformerEncoder' - else: - return module_name - - -class TransformerEncoderBase(FairseqEncoder): - """ - Transformer encoder consisting of *cfg.encoder.layers* layers. Each layer - is a :class:`TransformerEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, cfg, dictionary, embed_tokens): - self.cfg = cfg - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=module_name_fordropout(self.__class__.__name__) - ) - self.encoder_layerdrop = cfg.encoder.layerdrop - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = cfg.max_source_positions - - self.embed_tokens = embed_tokens - - self.embed_scale = 1.0 if cfg.no_scale_embedding else math.sqrt(embed_dim) - - self.embed_positions = ( - PositionalEmbedding( - cfg.max_source_positions, - embed_dim, - self.padding_idx, - learned=cfg.encoder.learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - if cfg.layernorm_embedding: - self.layernorm_embedding = LayerNorm(embed_dim, export=cfg.export) - else: - self.layernorm_embedding = None - - if not cfg.adaptive_input and cfg.quant_noise.pq > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(embed_dim, embed_dim, bias=False), - cfg.quant_noise.pq, - cfg.quant_noise.pq_block_size, - ) - else: - self.quant_noise = None - - if self.encoder_layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.encoder_layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [self.build_encoder_layer(cfg) for i in range(cfg.encoder.layers)] - ) - self.num_layers = len(self.layers) - - if cfg.encoder.normalize_before: - self.layer_norm = LayerNorm(embed_dim, export=cfg.export) - else: - self.layer_norm = None - - def build_encoder_layer(self, cfg): - layer = transformer_layer.TransformerEncoderLayerBase(cfg) - checkpoint = cfg.checkpoint_activations - if checkpoint: - offload_to_cpu = cfg.offload_activations - layer = checkpoint_wrapper(layer, offload_to_cpu=offload_to_cpu) - # if we are checkpointing, enforce that FSDP always wraps the - # checkpointed layer, regardless of layer size - min_params_to_wrap = cfg.min_params_to_wrap if not checkpoint else 0 - layer = fsdp_wrap(layer, min_num_params=min_params_to_wrap) - return layer - - def forward_embedding( - self, src_tokens, token_embedding: Optional[torch.Tensor] = None - ): - # embed tokens and positions - if token_embedding is None: - token_embedding = self.embed_tokens(src_tokens) 
- x = embed = self.embed_scale * token_embedding - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - if self.quant_noise is not None: - x = self.quant_noise(x) - return x, embed - - def forward( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - """ - return self.forward_scriptable( - src_tokens, src_lengths, return_all_hiddens, token_embeddings - ) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. - def forward_scriptable( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. 
- """ - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - has_pads = src_tokens.device.type == "xla" or encoder_padding_mask.any() - - x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings) - - # account for padding while computing the representation - if has_pads: - x = x * (1 - encoder_padding_mask.unsqueeze(-1).type_as(x)) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - encoder_states = [] - - if return_all_hiddens: - encoder_states.append(x) - - # encoder layers - for layer in self.layers: - x = layer( - x, encoder_padding_mask=encoder_padding_mask if has_pads else None - ) - if return_all_hiddens: - assert encoder_states is not None - encoder_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # The Pytorch Mobile lite interpreter does not supports returning NamedTuple in - # `forward` so we use a dictionary instead. - # TorchScript does not support mixed values so the values are all lists. - # The empty list is equivalent to None. - src_lengths = src_tokens.ne(self.padding_idx).sum(dim=1, dtype=torch.int32).reshape(-1, 1).contiguous() - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [encoder_padding_mask], # B x T - "encoder_embedding": [encoder_embedding], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [src_lengths], - } - - @torch.jit.export - def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if len(encoder_out["encoder_out"]) == 0: - new_encoder_out = [] - else: - new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)] - if len(encoder_out["encoder_padding_mask"]) == 0: - new_encoder_padding_mask = [] - else: - new_encoder_padding_mask = [ - encoder_out["encoder_padding_mask"][0].index_select(0, new_order) - ] - if len(encoder_out["encoder_embedding"]) == 0: - new_encoder_embedding = [] - else: - new_encoder_embedding = [ - encoder_out["encoder_embedding"][0].index_select(0, new_order) - ] - - if len(encoder_out["src_tokens"]) == 0: - src_tokens = [] - else: - src_tokens = [(encoder_out["src_tokens"][0]).index_select(0, new_order)] - - if len(encoder_out["src_lengths"]) == 0: - src_lengths = [] - else: - src_lengths = [(encoder_out["src_lengths"][0]).index_select(0, new_order)] - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": src_tokens, # B x T - "src_lengths": src_lengths, # B x 1 - } - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = 
"{}.embed_positions.weights".format(name) - if weights_key in state_dict: - print("deleting {0}".format(weights_key)) - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - for i in range(self.num_layers): - # update layer norms - self.layers[i].upgrade_state_dict_named( - state_dict, "{}.layers.{}".format(name, i) - ) - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) < 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - return state_dict - - -class TransformerEncoder(TransformerEncoderBase): - def __init__(self, args, dictionary, embed_tokens): - self.args = args - super().__init__( - TransformerConfig.from_namespace(args), - dictionary, - embed_tokens, - ) - - def build_encoder_layer(self, args): - return super().build_encoder_layer( - TransformerConfig.from_namespace(args), - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/sequence_scorer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/sequence_scorer.py deleted file mode 100644 index 411d4df4445ef8dd3f1907ad56f9de6943d1fed8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/sequence_scorer.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import torch -from fairseq import utils - - -class SequenceScorer(object): - """Scores the target for a given source sentence.""" - - def __init__( - self, - tgt_dict, - softmax_batch=None, - compute_alignment=False, - eos=None, - symbols_to_strip_from_output=None, - ): - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() if eos is None else eos - self.softmax_batch = softmax_batch or sys.maxsize - assert self.softmax_batch > 0 - self.compute_alignment = compute_alignment - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - def batch_for_softmax(dec_out, target): - # assumes decoder_out[0] is the only thing needed (may not be correct for future models!) 
- first, rest = dec_out[0], dec_out[1:] - bsz, tsz, dim = first.shape - if bsz * tsz < self.softmax_batch: - yield dec_out, target, True - else: - flat = first.contiguous().view(1, -1, dim) - flat_tgt = target.contiguous().view(flat.shape[:-1]) - s = 0 - while s < flat.size(1): - e = s + self.softmax_batch - yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False - s = e - - def gather_target_probs(probs, target): - probs = probs.gather( - dim=2, - index=target.unsqueeze(-1), - ) - return probs - - orig_target = sample["target"] - - # compute scores for each model in the ensemble - avg_probs = None - avg_attn = None - for model in models: - model.eval() - decoder_out = model(**net_input) - attn = decoder_out[1] if len(decoder_out) > 1 else None - if type(attn) is dict: - attn = attn.get("attn", None) - - batched = batch_for_softmax(decoder_out, orig_target) - probs, idx = None, 0 - for bd, tgt, is_single in batched: - sample["target"] = tgt - curr_prob = model.get_normalized_probs( - bd, log_probs=len(models) == 1, sample=sample - ).data - if is_single: - probs = gather_target_probs(curr_prob, orig_target) - else: - if probs is None: - probs = curr_prob.new(orig_target.numel()) - step = curr_prob.size(0) * curr_prob.size(1) - end = step + idx - tgt_probs = gather_target_probs( - curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt - ) - probs[idx:end] = tgt_probs.view(-1) - idx = end - sample["target"] = orig_target - - probs = probs.view(sample["target"].shape) - - if avg_probs is None: - avg_probs = probs - else: - avg_probs.add_(probs) - if attn is not None: - if torch.is_tensor(attn): - attn = attn.data - else: - attn = attn[0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(models) > 1: - avg_probs.div_(len(models)) - avg_probs.log_() - if avg_attn is not None: - avg_attn.div_(len(models)) - - bsz = avg_probs.size(0) - hypos = [] - start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz - for i in range(bsz): - # remove padding from ref - ref = ( - utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad) - if sample["target"] is not None - else None - ) - tgt_len = ref.numel() - avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len] - score_i = avg_probs_i.sum() / tgt_len - if avg_attn is not None: - avg_attn_i = avg_attn[i] - if self.compute_alignment: - alignment = utils.extract_hard_alignment( - avg_attn_i, - sample["net_input"]["src_tokens"][i], - sample["target"][i], - self.pad, - self.eos, - ) - else: - alignment = None - else: - avg_attn_i = alignment = None - hypos.append( - [ - { - "tokens": ref, - "score": score_i, - "attention": avg_attn_i, - "alignment": alignment, - "positional_scores": avg_probs_i, - } - ] - ) - return hypos diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_mmdet.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_mmdet.py deleted file mode 100644 index a743b0b67d5ab664257040621d28c1b1b4451709..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_mmdet.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import unittest - -from detectron2.layers import ShapeSpec -from detectron2.modeling.mmdet_wrapper import MMDetBackbone, MMDetDetector - -try: - import mmdet.models # noqa - - HAS_MMDET = True -except ImportError: - HAS_MMDET = False - - -@unittest.skipIf(not HAS_MMDET, "mmdet not available") -class TestMMDetWrapper(unittest.TestCase): - def test_backbone(self): - MMDetBackbone( - backbone=dict( - type="DetectoRS_ResNet", - conv_cfg=dict(type="ConvAWS"), - sac=dict(type="SAC", use_deform=True), - stage_with_sac=(False, True, True, True), - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type="BN", requires_grad=True), - norm_eval=True, - style="pytorch", - ), - neck=dict( - type="FPN", - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5, - ), - # skip pretrained model for tests - # pretrained_backbone="torchvision://resnet50", - output_shapes=[ShapeSpec(channels=256, stride=s) for s in [4, 8, 16, 32, 64]], - output_names=["p2", "p3", "p4", "p5", "p6"], - ) - - def test_detector(self): - # a basic R50 Mask R-CNN - MMDetDetector( - detector=dict( - type="MaskRCNN", - backbone=dict( - type="ResNet", - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type="BN", requires_grad=True), - norm_eval=True, - style="pytorch", - # skip pretrained model for tests - # init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')) - ), - neck=dict( - type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5 - ), - rpn_head=dict( - type="RPNHead", - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type="AnchorGenerator", - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - ), - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - roi_head=dict( - type="StandardRoIHead", - bbox_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - bbox_head=dict( - type="Shared2FCBBoxHead", - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2], - ), - reg_class_agnostic=False, - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - mask_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - mask_head=dict( - type="FCNMaskHead", - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0), - ), - ), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False, - ), - allowed_border=-1, - pos_weight=-1, - debug=False, - ), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - 
), - rcnn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - ), - mask_size=28, - pos_weight=-1, - debug=False, - ), - ), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - score_thr=0.05, - nms=dict(type="nms", iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5, - ), - ), - ), - pixel_mean=[1, 2, 3], - pixel_std=[1, 2, 3], - ) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/openpose/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/openpose/__init__.py deleted file mode 100644 index 8c26f1b37dae854f51da938da2fa67a8ef48ce5a..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/openpose/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - -import torch -import numpy as np -from . import util -from .body import Body -from .hand import Hand -from annotator.util import annotator_ckpts_path - - -body_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/body_pose_model.pth" -hand_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/hand_pose_model.pth" - - -class OpenposeDetector: - def __init__(self): - body_modelpath = os.path.join(annotator_ckpts_path, "body_pose_model.pth") - hand_modelpath = os.path.join(annotator_ckpts_path, "hand_pose_model.pth") - - if not os.path.exists(hand_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(body_model_path, model_dir=annotator_ckpts_path) - load_file_from_url(hand_model_path, model_dir=annotator_ckpts_path) - - self.body_estimation = Body(body_modelpath) - self.hand_estimation = Hand(hand_modelpath) - - def __call__(self, oriImg, hand=False): - oriImg = oriImg[:, :, ::-1].copy() - with torch.no_grad(): - candidate, subset = self.body_estimation(oriImg) - canvas = np.zeros_like(oriImg) - canvas = util.draw_bodypose(canvas, candidate, subset) - if hand: - hands_list = util.handDetect(candidate, subset, oriImg) - all_hand_peaks = [] - for x, y, w, is_left in hands_list: - peaks = self.hand_estimation(oriImg[y:y+w, x:x+w, :]) - peaks[:, 0] = np.where(peaks[:, 0] == 0, peaks[:, 0], peaks[:, 0] + x) - peaks[:, 1] = np.where(peaks[:, 1] == 0, peaks[:, 1], peaks[:, 1] + y) - all_hand_peaks.append(peaks) - canvas = util.draw_handpose(canvas, all_hand_peaks) - return canvas, dict(candidate=candidate.tolist(), subset=subset.tolist()) diff --git a/spaces/PaddlePaddle/solov2/README.md b/spaces/PaddlePaddle/solov2/README.md deleted file mode 100644 index daa9c10f29f8a1741cf51b5f9327c048c63000d5..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/solov2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Solov2 -emoji: 📉 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PhucBui/demo/README.md b/spaces/PhucBui/demo/README.md deleted file mode 100644 index 0a186c9db487f601226c883a85b315aad5461d5c..0000000000000000000000000000000000000000 --- a/spaces/PhucBui/demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Demo -emoji: 🐨 -colorFrom: 
pink -colorTo: purple -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Potanin/12345/lib/infer_pack/attentions.py b/spaces/Potanin/12345/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/Potanin/12345/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - 
p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Ragnov/STT-Grammar-Checker/app.py b/spaces/Ragnov/STT-Grammar-Checker/app.py deleted file mode 100644 index 353a189c1fcc583574b5d7dd059ed7abf1045c03..0000000000000000000000000000000000000000 --- a/spaces/Ragnov/STT-Grammar-Checker/app.py +++ /dev/null @@ -1,166 +0,0 @@ -# Module Imports -from pytube import YouTube -import whisper -import gradio as gr -import time -import re -from happytransformer import HappyTextToText, TTSettings -from difflib import Differ - -STTmodel = 
whisper.load_model("base.en") -GCmodel = HappyTextToText("T5", "Ragnov/T5-Base-Grammar-Checker") -args = TTSettings(num_beams=5, min_length=1) - -# Functions -def transcribe(file): - options = dict(task="transcribe", best_of=5) - text = STTmodel.transcribe(file, **options)["text"] - return text.strip() - -def get_filename(file_obj): - return file_obj.orig_name - -def inference(link): - yt = YouTube(link) - path = yt.streams.filter(only_audio=True)[0].download(filename="audio.mp4") - options = whisper.DecodingOptions(without_timestamps=True) - results = STTmodel.transcribe(path) - return results['text'] - -def populate_metadata(link): - yt = YouTube(link) - return yt.thumbnail_url, yt.title - -def transcribe_file(file): - options = dict(task="transcribe", best_of=5) - file = get_filename(file) - text = STTmodel.transcribe(file, **options)["text"] - return text.strip() - -def real_time_transcribe(audio, state=""): - time.sleep(2) - text = STTmodel.transcribe(audio)["text"] - state += text + " " - return state, state - -def paragraph_to_sentences(paragraph): - """ - This function takes a paragraph as input and returns a list of sentences. - - Args: - paragraph (str): The paragraph to be converted to a list of sentences. - - Returns: - list: A list of sentences extracted from the paragraph. - """ - # Split the paragraph into sentences using a period, exclamation mark or question mark as the delimiter. - sentences = re.split(r'(?<=[^A-Z].[.?!]) +(?=[A-Z])|(?<=[^A-Z][!]) +(?=[A-Z])', paragraph) - - # Remove any leading or trailing spaces from each sentence. - sentences = [sentence.strip() for sentence in sentences] - - return sentences - -def sentences_to_paragraph(sentences): - final_result = "" - for num, sentence in enumerate(sentences): - result = GCmodel.generate_text("grammar: "+ sentence, args=args) - final_result += result.text - if num < len(sentences) - 1: - final_result += " " - - return final_result - -# Function that takes transcribed result and gramify it -def gramify(paragraph): - result_1 = paragraph_to_sentences(paragraph) - final_result = sentences_to_paragraph(result_1) - return final_result - -# Function that takes transcribed text for its first inpu -def diff_texts(text1, text2): - """ - This function takes transcribed text for its first input - and grammatically corrected text as its second input which return the difference - of the two text. - """ - d = Differ() - return [ - (token[2:], token[0] if token[0] != " " else None) - for token in d.compare(text1, text2) - ] -res_diff = [] -# Gradio Blocks -demo = gr.Blocks() -with demo: - gr.Markdown("""

    Speech To Text Grammar Checker

    """) - with gr.Tabs(): - with gr.TabItem("Voice Record"): - with gr.Row(): - audio = gr.Audio(show_label=False,source="microphone",type="filepath") - text_output1 = gr.Textbox(label="Transcription", placeholder="Text Output") - with gr.Row(): - transcribe_button1 = gr.Button("Transcribe") - with gr.Row(): - Grammar_text_output1 = gr.Textbox(label="Grammatically Corrected Text", placeholder="Text Output") - with gr.Row(): - Diff_text_output1 = gr.HighlightedText(label="Text Difference",combine_adjacent=True,value=res_diff).style(color_map={"+": "green", "-": "red"}) - with gr.TabItem("Upload File"): - with gr.Row(): - file_upload = gr.File() - text_output2 = gr.Textbox(label="Transcription", placeholder="Text Output") - with gr.Row(): - transcribe_button2 = gr.Button("Transcribe") - with gr.Row(): - Grammar_text_output2 = gr.Textbox(label="Grammatically Corrected Text", placeholder="Text Output") - with gr.Row(): - Diff_text_output2 = gr.HighlightedText(label="Text Difference",combine_adjacent=True,value=res_diff).style(color_map={"+": "green", "-": "red"}) - with gr.TabItem("Youtube Link"): - with gr.Box(): - link = gr.Textbox(label="YouTube Link") - with gr.Row().style(mobile_collapse=False, equal_height=True): - title = gr.Label(label="Video Title", placeholder="Title") - img = gr.Image(label="Thumbnail") - text_link_output = gr.Textbox(label="Transcription", placeholder="Text Output",lines=5) - with gr.Row().style(mobile_collapse=False, equal_height=True): - transcribe_button3 = gr.Button("Transcribe") - with gr.Row(): - Grammar_text_output3 = gr.Textbox(label="Grammatically Corrected Text", placeholder="Text Output") - with gr.Row().style(mobile_collapse=False, equal_height=True): - Diff_text_output3 = gr.HighlightedText(label="Text Difference",combine_adjacent=True,value=res_diff).style(color_map={"+": "green", "-": "red"}) - gr.Markdown("""

    Not Satisfied with the result?
    - Click here to help us make it better. -

    """) - - with gr.Accordion("About",open=False): - gr.Markdown(""" -

    Thesis System presented by

    - • Daniel L. Espinola
    - • Jhon Vincent A. Gupo
    - • Ryan M. Ibay

    - In partial fulfillment of the requirements for the degree
    - Bachelor of Science in Computer Science Specialized in Intelligent Systems
    - Laguna State Polytechnic University - Los Baños Campus.

    - We would also like to thank our adviser and subject specialist for their guidance in making this idea a reality.
    - • Crisanto F. Gulay - Adviser
    - • Gene Marck B. Catedrilla - Subject Specialist
    -

    - """) - link.change(populate_metadata, inputs=[link], outputs=[img, title]) - - # Transcription - transcribe_button1.click(transcribe, inputs=audio, outputs=text_output1) - transcribe_button2.click(transcribe_file, inputs=file_upload, outputs=text_output2) - transcribe_button3.click(inference, inputs=link, outputs=text_link_output) - - # Gramify - text_output1.change(gramify,inputs=text_output1,outputs=Grammar_text_output1) - text_output2.change(gramify,inputs=text_output2,outputs=Grammar_text_output2) - text_link_output.change(gramify, inputs=text_link_output ,outputs=Grammar_text_output3) - - # For Text Difference - Grammar_text_output1.change(diff_texts,inputs=[text_output1,Grammar_text_output1],outputs=Diff_text_output1) - Grammar_text_output2.change(diff_texts,inputs=[text_output2,Grammar_text_output2],outputs=Diff_text_output2) - Grammar_text_output3.change(diff_texts,inputs=[text_link_output,Grammar_text_output3],outputs=Diff_text_output3) - -demo.launch() \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py deleted file mode 100644 index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/actions.py +++ /dev/null @@ -1,207 +0,0 @@ -# actions.py - -from .exceptions import ParseException -from .util import col - - -class OnlyOnce: - """ - Wrapper for parse actions, to ensure they are only called once. - """ - - def __init__(self, method_call): - from .core import _trim_arity - - self.callable = _trim_arity(method_call) - self.called = False - - def __call__(self, s, l, t): - if not self.called: - results = self.callable(s, l, t) - self.called = True - return results - raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset") - - def reset(self): - """ - Allow the associated parse action to be called once more. - """ - - self.called = False - - -def match_only_at_col(n): - """ - Helper method for defining parse actions that require matching at - a specific column in the input text. - """ - - def verify_col(strg, locn, toks): - if col(locn, strg) != n: - raise ParseException(strg, locn, "matched token not at column {}".format(n)) - - return verify_col - - -def replace_with(repl_str): - """ - Helper method for common parse actions that simply return - a literal value. Especially useful when used with - :class:`transform_string` (). - - Example:: - - num = Word(nums).set_parse_action(lambda toks: int(toks[0])) - na = one_of("N/A NA").set_parse_action(replace_with(math.nan)) - term = na | num - - term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s, l, t: [repl_str] - - -def remove_quotes(s, l, t): - """ - Helper parse action for removing quotation marks from parsed - quoted strings. 
- - Example:: - - # by default, quotation marks are included in parsed results - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use remove_quotes to strip quotation marks from parsed results - quoted_string.set_parse_action(remove_quotes) - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - - -def with_attribute(*args, **attr_dict): - """ - Helper to create a validating parse action to be used with start - tags created with :class:`make_xml_tags` or - :class:`make_html_tags`. Use ``with_attribute`` to qualify - a starting tag with a required attribute value, to avoid false - matches on common tags such as ```` or ``
    ``. - - Call ``with_attribute`` with a series of attribute names and - values. Specify the list of filter attributes names and values as: - - - keyword arguments, as in ``(align="right")``, or - - as an explicit dict with ``**`` operator, when an attribute - name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))`` - - For attribute names with a namespace prefix, you must use the second - form. Attribute names are matched insensitive to upper/lower case. - - If just testing for ``class`` (with or without a namespace), use - :class:`with_class`. - - To verify that the attribute exists, but without specifying a value, - pass ``with_attribute.ANY_VALUE`` as the value. - - Example:: - - html = ''' -
    - Some text -
    1 4 0 1 0
    -
    1,3 2,3 1,1
    -
    this has no type
    -
    - - ''' - div,div_end = make_html_tags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().set_parse_action(with_attribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attr_dict.items() - attrs = [(k, v) for k, v in attrs] - - def pa(s, l, tokens): - for attrName, attrValue in attrs: - if attrName not in tokens: - raise ParseException(s, l, "no matching attribute " + attrName) - if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException( - s, - l, - "attribute {!r} has value {!r}, must be {!r}".format( - attrName, tokens[attrName], attrValue - ), - ) - - return pa - - -with_attribute.ANY_VALUE = object() - - -def with_class(classname, namespace=""): - """ - Simplified version of :class:`with_attribute` when - matching on a div class - made difficult because ``class`` is - a reserved word in Python. - - Example:: - - html = ''' -
    - Some text -
    1 4 0 1 0
    -
    1,3 2,3 1,1
    -
    this <div> has no class
    -
    - - ''' - div,div_end = make_html_tags("div") - div_grid = div().set_parse_action(with_class("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "{}:class".format(namespace) if namespace else "class" - return with_attribute(**{classattr: classname}) - - -# pre-PEP8 compatibility symbols -replaceWith = replace_with -removeQuotes = remove_quotes -withAttribute = with_attribute -withClass = with_class -matchOnlyAtCol = match_only_at_col diff --git a/spaces/Raspberry-ai/main/raspberry_flagging.py b/spaces/Raspberry-ai/main/raspberry_flagging.py deleted file mode 100644 index 5d575af8245e239841d8b4b7602990891fdf10de..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/raspberry_flagging.py +++ /dev/null @@ -1,184 +0,0 @@ -import csv -import datetime -import time -import io -import json -import os -import sys -import gradio as gr -import subprocess - -from gradio import encryptor, utils -from gradio.flagging import FlaggingCallback, _get_dataset_features_info -from gradio.components import IOComponent -from typing import TYPE_CHECKING, Any, List, Optional - -from huggingface_hub.utils import run_subprocess - - -""" - This class is forked from https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py - -""" - -class RaspberryHuggingFaceDatasetSaver(FlaggingCallback): - """ - A FlaggingCallback that saves flagged data to a HuggingFace dataset. - """ - - def __init__( - self, - hf_token: str, - dataset_url: str, - repo_id: str, - private: bool = True, - ): - """ - Parameters: - hf_token: The HuggingFace token to use to create (and write the flagged sample to) the HuggingFace dataset. - dataset_name: The name of the dataset to save the data to, e.g. "image-classifier-1" - organization: The organization to save the dataset under. The hf_token must provide write access to this organization. If not provided, saved under the name of the user corresponding to the hf_token. - private: Whether the dataset should be private (defaults to True). - """ - self.hf_token = hf_token - self.dataset_url = dataset_url - self.dataset_name = repo_id - self.dataset_private = private - csv.field_size_limit(int(sys.maxsize/10)) # https://stackoverflow.com/questions/15063936/csv-error-field-larger-than-field-limit-131072 - - def setup(self, components: List[IOComponent], flagging_dir: str): - """ - Params: - flagging_dir (str): local directory where the dataset is cloned, - updated, and pushed from. - """ - try: - import huggingface_hub - except (ImportError, ModuleNotFoundError): - raise ImportError( - "Package `huggingface_hub` not found is needed " - "for HuggingFaceDatasetSaver. Try 'pip install huggingface_hub'." - ) - - # Wrap in try-catch ? - path_to_dataset_repo = huggingface_hub.create_repo( - repo_id=self.dataset_name, - token=self.hf_token, - private=self.dataset_private, - repo_type="dataset", - exist_ok=True, - ) - self.path_to_dataset_repo = path_to_dataset_repo # e.g. 
"https://huggingface.co/datasets/abidlabs/test-audio-10" - # self.path_to_dataset_repo = self.dataset_url - self.components = components - self.flagging_dir = flagging_dir - self.dataset_dir = os.path.join(flagging_dir, self.dataset_name) - - print('dataset_dir: {} exists: {}'.format(self.dataset_dir, os.path.exists(self.dataset_dir))) - - try: - print("running `git lfs update --force` subprocess") - - # Without the git lfs update call, the Repository call below fails with a "Hook already exists: pre-push" error. - subprocess.run(['git', 'lfs', 'update', '--force'], capture_output=True, text=True) - - # In case git lfs update call above fails try the following line. - # call_result = subprocess.run(['rm', '-rf', '.git/hooks/pre-push'], capture_output=True, text=True) - - except subprocess.CalledProcessError as e: - output = e.output - print("subprocess output except: ", output) - - self.repo = huggingface_hub.Repository( - local_dir=self.dataset_dir, - clone_from=self.path_to_dataset_repo, - repo_type="dataset", - use_auth_token=self.hf_token, - ) - self.repo.git_pull(lfs=True) - - # Should filename be user-specified? - self.log_file = os.path.join(self.dataset_dir, "data.csv") - self.infos_file = os.path.join(self.dataset_dir, "dataset_infos.json") - - def _create_dated_directory_path(): - print("Unused method") - # if dataset_dir_exists: - # datetime_dir = os.makedirs(os.path.join(time.strftime("/%Y/%m/%d"), self.dataset_dir)) - # print("datetime_dir:", datetime_dir) - # self.dataset_dir = datetime_dir - - def flag( - self, - flag_data: List[Any], - flag_option: Optional[str] = None, - flag_index: Optional[int] = None, - username: Optional[str] = None, - ) -> int: - print("starting flag()") - self.repo.git_pull(lfs=True) - is_new = not os.path.exists(self.log_file) - - - # Gradio source code assumes the flag call always contains the same components and flag data - # This is not the case for raspberry, for example inference calls can be with or without input images. - # Below is necessary to account for variable input to be flagged - - # self.components = [component for component in self.components if component is not None and component.value is not None] - # flag_data = [data for data in flag_data if data] - - components_size = len(self.components) - flag_data_size = len(flag_data) - if components_size != flag_data_size: - print('Size of components: [{}] must be the same as the size of flagged data [{}]'.format(components_size, flag_data_size)) - else: - print('Size of components and flagged data are the same: {}'.format(components_size)) - - - print("log file is new: ", is_new) - with open(self.log_file, "a", newline="", encoding="utf-8") as csvfile: - writer = csv.writer(csvfile) - - # File previews for certain input and output types - infos, file_preview_types, headers = _get_dataset_features_info( - is_new, self.components - ) - - # Generate the headers and dataset_infos - print("generating headers and dataset_infos") - if is_new: - writer.writerow(utils.sanitize_list_for_csv(headers)) - - # Generate the row corresponding to the flagged sample - csv_data = [] - for component, sample in zip(self.components, flag_data): - print("flag data sample:", sample) - if component.label == "Input Image" and not sample: - # Skip flagging the input image if it's not set. Deserializing an unset input image breaks. 
- continue - - save_dir = os.path.join( - self.dataset_dir, - utils.strip_invalid_filename_characters(component.label), - ) - filepath = component.deserialize(sample, save_dir, None) - csv_data.append(filepath) - if isinstance(component, tuple(file_preview_types)): - csv_data.append( - "{}/resolve/main/{}".format(self.path_to_dataset_repo, filepath) - ) - csv_data.append(flag_option if flag_option is not None else "") - print("writing row") - writer.writerow(utils.sanitize_list_for_csv(csv_data)) - - if is_new: - json.dump(infos, open(self.infos_file, "w")) - - with open(self.log_file, "r", encoding="utf-8") as csvfile: - line_count = len([None for row in csv.reader(csvfile)]) - 1 - - print("pushing to da hub...") - self.repo.push_to_hub(commit_message="Flagged sample #{}".format(line_count)) - print("...pushed to da hub") - - return line_count \ No newline at end of file diff --git a/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py b/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py deleted file mode 100644 index 06e4e881bfa50e2cd74747511a3ad2e8676e0c70..0000000000000000000000000000000000000000 --- a/spaces/Rbrq/DeticChatGPT/tools/remove_lvis_rare.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - catid2freq = {x['id']: x['frequency'] for x in data['categories']} - print('ori #anns', len(data['annotations'])) - exclude = ['r'] - data['annotations'] = [x for x in data['annotations'] \ - if catid2freq[x['category_id']] not in exclude] - print('filtered #anns', len(data['annotations'])) - out_path = args.ann[:-5] + '_norare.json' - print('Saving to', out_path) - json.dump(data, open(out_path, 'w')) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py deleted file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... 
['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. 
- """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. 
- """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py deleted file mode 100644 index ba5bdde980dc0cd76375455c9c7ffaae4b25531e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/two_stage.py +++ /dev/null @@ -1,215 +0,0 @@ -import torch -import torch.nn as nn - -# from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class TwoStageDetector(BaseDetector): - """Base class for two-stage detectors. - - Two-stage detectors typically consisting of a region proposal network and a - task-specific regression head. 
- """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(TwoStageDetector, self).__init__() - self.backbone = build_backbone(backbone) - - if neck is not None: - self.neck = build_neck(neck) - - if rpn_head is not None: - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head_ = rpn_head.copy() - rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head_) - - if roi_head is not None: - # update train and test cfg here for now - # TODO: refactor assigner & sampler - rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None - roi_head.update(train_cfg=rcnn_train_cfg) - roi_head.update(test_cfg=test_cfg.rcnn) - self.roi_head = build_head(roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.init_weights(pretrained=pretrained) - - @property - def with_rpn(self): - """bool: whether the detector has RPN""" - return hasattr(self, 'rpn_head') and self.rpn_head is not None - - @property - def with_roi_head(self): - """bool: whether the detector has a RoI head""" - return hasattr(self, 'roi_head') and self.roi_head is not None - - def init_weights(self, pretrained=None): - """Initialize the weights in detector. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - super(TwoStageDetector, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - if self.with_neck: - if isinstance(self.neck, nn.Sequential): - for m in self.neck: - m.init_weights() - else: - self.neck.init_weights() - if self.with_rpn: - self.rpn_head.init_weights() - if self.with_roi_head: - self.roi_head.init_weights(pretrained) - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - outs = () - # backbone - x = self.extract_feat(img) - # rpn - if self.with_rpn: - rpn_outs = self.rpn_head(x) - outs = outs + (rpn_outs, ) - proposals = torch.randn(1000, 4).to(img.device) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposals) - outs = outs + (roi_outs, ) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - proposals : override rpn proposals with custom proposals. Use when - `with_rpn` is False. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self.extract_feat(img) - - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - return losses - - async def async_simple_test(self, - img, - img_meta, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - - if proposals is None: - proposal_list = await self.rpn_head.async_simple_test_rpn( - x, img_meta) - else: - proposal_list = proposals - - return await self.roi_head.async_simple_test( - x, proposal_list, img_meta, rescale=rescale) - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - x = self.extract_feat(img) - - # get origin input shape to onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - x = self.extract_feats(imgs) - proposal_list = self.rpn_head.aug_test_rpn(x, img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) diff --git a/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py b/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py deleted file mode 100644 index 41c6f4a9250b2d5954ce93cb7c04e7b55025cb51..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/utils/common_schedulers.py +++ /dev/null @@ -1,50 +0,0 @@ -from utils.hparams import hparams - - -class NoneSchedule(object): - def __init__(self, optimizer): - super().__init__() - self.optimizer = optimizer - self.constant_lr = hparams['lr'] - self.step(0) - - def step(self, num_updates): - self.lr = self.constant_lr - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - def get_lr(self): - return self.optimizer.param_groups[0]['lr'] - - def get_last_lr(self): - return self.get_lr() - - -class RSQRTSchedule(object): - def __init__(self, optimizer): - super().__init__() - self.optimizer = optimizer - self.constant_lr = hparams['lr'] - self.warmup_updates = hparams['warmup_updates'] - self.hidden_size = hparams['hidden_size'] - self.lr = hparams['lr'] - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5 - rsqrt_hidden = self.hidden_size ** -0.5 - self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - def get_lr(self): - return self.optimizer.param_groups[0]['lr'] - - def get_last_lr(self): - return self.get_lr() diff --git a/spaces/Ryukijano/canny_coyo1m/README.md b/spaces/Ryukijano/canny_coyo1m/README.md deleted file mode 100644 index 671348306a897f06d5cef95dc9fa3f13c5b5f697..0000000000000000000000000000000000000000 --- a/spaces/Ryukijano/canny_coyo1m/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Canny Coyo1m -emoji: 🌖 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: jax-diffusers-event/canny_coyo1m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py b/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py deleted file mode 100644 index 21c9a0ce2239f47505122c7321a2ade7fa2a0960..0000000000000000000000000000000000000000 --- a/spaces/SAAZIZI/SummarizeAV/keyword_retriever/keyword_retreiver.py +++ /dev/null @@ -1,66 +0,0 @@ -import json -import os -import time - -import chromadb -from llama_index import (ServiceContext, StorageContext, VectorStoreIndex, ) -from llama_index.embeddings import HuggingFaceEmbedding -from llama_index.schema import Document -from llama_index.vector_stores import ChromaVectorStore - -import config -from logger import logger - - -class MediaRetriever: - def __init__(self, media_id, similarity_top_k=5): - self.media_id = media_id - self.similarity_top_k = similarity_top_k - - self._initialize_retriever() - - def _initialize_retriever(self): - docs = self._load_documents() - - # Create client and a new collection - chroma_client = chromadb.EphemeralClient() - try: - chroma_collection = 
chroma_client.create_collection(f"quickstart-{time.time()}") - except Exception as e: - logger.error(f"Exception encountered: {e}") - chroma_collection = None - - # Define embedding function - embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5") - - # Set up ChromaVectorStore and load in data - if chroma_collection is not None: - vector_store = ChromaVectorStore(chroma_collection=chroma_collection) - else: - logger.error("chroma_collection is not initialized.") # handle this case - - storage_context = StorageContext.from_defaults(vector_store=vector_store) - service_context = ServiceContext.from_defaults(embed_model=embed_model) - - logger.info("Start indexing transcription") - self.index = VectorStoreIndex.from_documents(docs, storage_context=storage_context, - service_context=service_context, show_progress=True) - logger.info("End indexing transcription") - - self.retreiver = self.index.as_retriever(similarity_top_k=self.similarity_top_k) - - def _load_documents(self): - with open(os.path.join(config.output_path_transcription, f"{self.media_id}.json"), "r") as f: - json_data = json.load(f) - - documents = [] - for segment in json_data["segments"]: - text = segment["text"] - start = segment["start"] - metadata = {"start": start} - documents.append(Document(text=text, metadata=metadata)) - return documents - - def search(self, query): - response = self.retreiver.retrieve(query) - return response diff --git a/spaces/Salesforce/BLIP/train_vqa.py b/spaces/Salesforce/BLIP/train_vqa.py deleted file mode 100644 index 89eb7490862e517cc660f842396033c21d441a20..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP/train_vqa.py +++ /dev/null @@ -1,202 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from models.blip_vqa import blip_vqa -import utils -from utils import cosine_lr_schedule -from data import create_dataset, create_sampler, create_loader -from data.vqa_dataset import vqa_collate_fn -from data.utils import save_result - - -def train(model, data_loader, optimizer, epoch, device): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - - for i,(image, question, answer, weights, n) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image, weights = image.to(device,non_blocking=True), weights.to(device,non_blocking=True) - - loss = model(image, question, answer, train=True, n=n, weights=weights) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(loss=loss.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluation(model, data_loader, device, config) : - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Generate VQA test result:' - print_freq = 50 - - result = [] - - if config['inference']=='rank': - answer_list = data_loader.dataset.answer_list - answer_candidates = model.tokenizer(answer_list, padding='longest', return_tensors='pt').to(device) - answer_candidates.input_ids[:,0] = model.tokenizer.bos_token_id - - for n, (image, question, question_id) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image = image.to(device,non_blocking=True) - - if config['inference']=='generate': - answers = model(image, question, train=False, inference='generate') - - for answer, ques_id in zip(answers, question_id): - ques_id = int(ques_id.item()) - result.append({"question_id":ques_id, "answer":answer}) - - elif config['inference']=='rank': - answer_ids = model(image, question, answer_candidates, train=False, inference='rank', k_test=config['k_test']) - - for ques_id, answer_id in zip(question_id, answer_ids): - result.append({"question_id":int(ques_id.item()), "answer":answer_list[answer_id]}) - - return result - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating vqa datasets") - datasets = create_dataset('vqa', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler(datasets, [True, 
False], num_tasks, global_rank) - else: - samplers = [None, None] - - train_loader, test_loader = create_loader(datasets,samplers, - batch_size=[config['batch_size_train'],config['batch_size_test']], - num_workers=[4,4],is_trains=[True, False], - collate_fns=[vqa_collate_fn,None]) - #### Model #### - print("Creating model") - model = blip_vqa(pretrained=config['pretrained'], image_size=config['image_size'], - vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - best = 0 - best_epoch = 0 - - print("Start training") - start_time = time.time() - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device) - - else: - break - - if utils.is_main_process(): - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - 'epoch': epoch, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_%02d.pth'%epoch)) - - dist.barrier() - - vqa_result = evaluation(model_without_ddp, test_loader, device, config) - result_file = save_result(vqa_result, args.result_dir, 'vqa_result') - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/vqa.yaml') - parser.add_argument('--output_dir', default='output/VQA') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - args.result_dir = os.path.join(args.output_dir, 'result') - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - Path(args.result_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command b/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/run_macOS.command +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! 
git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/Senpaisora6/dreambooth-training/convertosd.py b/spaces/Senpaisora6/dreambooth-training/convertosd.py deleted file mode 100644 index b242edb1de11ad551b3c7ad98f5689fef2c3321a..0000000000000000000000000000000000000000 --- a/spaces/Senpaisora6/dreambooth-training/convertosd.py +++ /dev/null @@ -1,223 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. -# Written by jachiam - -import argparse -import os.path as osp - -import torch - - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." 
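# Editor's sketch, not part of the original file: the (stable-diffusion, HF Diffusers)
# prefix pairs collected above are applied by plain substring replacement on each
# state-dict key, which is what convert_unet_state_dict below does via
# v.replace(hf_part, sd_part). The key used here is hypothetical and only illustrates
# the renaming direction (HF Diffusers key -> Stable Diffusion checkpoint key).
example_hf_key = "mid_block.attentions.0.proj_out.weight"
example_sd_key = example_hf_key.replace("mid_block.attentions.0.", "middle_block.1.")
assert example_sd_key == "middle_block.1.proj_out.weight"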
-unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. - mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# -# pretty much a no-op - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location='cpu') - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location='cpu') - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location='cpu') - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - - state_dict = {k:v.half() for k,v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py deleted file mode 100644 index e9f0318e306fa04bff0ada70486b41aaa69b07c8..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/utils.py +++ /dev/null @@ -1,608 +0,0 @@ -import argparse -import json -import warnings -from collections import OrderedDict -from copy import deepcopy -from typing import Any, Dict, List - -import numpy as np -import torch -from transformers import AutoTokenizer - -from groundingdino.util.slconfig import SLConfig - - -def slprint(x, name="x"): - if isinstance(x, (torch.Tensor, np.ndarray)): - print(f"{name}.shape:", x.shape) - elif isinstance(x, (tuple, list)): - print("type x:", type(x)) - for i in range(min(10, len(x))): - slprint(x[i], f"{name}[{i}]") - elif isinstance(x, dict): - for k, v in x.items(): - slprint(v, f"{name}[{k}]") - else: - print(f"{name}.type:", type(x)) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". 
(%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class CocoClassMapper: - def __init__(self) -> None: - self.category_map_str = { - "1": 1, - "2": 2, - "3": 3, - "4": 4, - "5": 5, - "6": 6, - "7": 7, - "8": 8, - "9": 9, - "10": 10, - "11": 11, - "13": 12, - "14": 13, - "15": 14, - "16": 15, - "17": 16, - "18": 17, - "19": 18, - "20": 19, - "21": 20, - "22": 21, - "23": 22, - "24": 23, - "25": 24, - "27": 25, - "28": 26, - "31": 27, - "32": 28, - "33": 29, - "34": 30, - "35": 31, - "36": 32, - "37": 33, - "38": 34, - "39": 35, - "40": 36, - "41": 37, - "42": 38, - "43": 39, - "44": 40, - "46": 41, - "47": 42, - "48": 43, - "49": 44, - "50": 45, - "51": 46, - "52": 47, - "53": 48, - "54": 49, - "55": 50, - "56": 51, - "57": 52, - "58": 53, - "59": 54, - "60": 55, - "61": 56, - "62": 57, - "63": 58, - "64": 59, - "65": 60, - "67": 61, - "70": 62, - "72": 63, - "73": 64, - "74": 65, - "75": 66, - "76": 67, - "77": 68, - "78": 69, - "79": 70, - "80": 71, - "81": 72, - "82": 73, - "84": 74, - "85": 75, - "86": 76, - "87": 77, - "88": 78, - "89": 79, - "90": 80, - } - self.origin2compact_mapper = {int(k): v - 1 for k, v in self.category_map_str.items()} - self.compact2origin_mapper = {int(v - 1): int(k) for k, v in self.category_map_str.items()} - - def origin2compact(self, idx): - return self.origin2compact_mapper[int(idx)] - - def compact2origin(self, idx): - return self.compact2origin_mapper[int(idx)] - - -def to_device(item, device): - if isinstance(item, torch.Tensor): - return item.to(device) - elif isinstance(item, list): - return [to_device(i, device) for i in item] - elif isinstance(item, dict): - return {k: to_device(v, device) for k, v in item.items()} - else: - raise NotImplementedError( - "Call Shilong if you use other containers! type: {}".format(type(item)) - ) - - -# -def get_gaussian_mean(x, axis, other_axis, softmax=True): - """ - - Args: - x (float): Input images(BxCxHxW) - axis (int): The index for weighted mean - other_axis (int): The other index - - Returns: weighted index for axis, BxC - - """ - mat2line = torch.sum(x, axis=other_axis) - # mat2line = mat2line / mat2line.mean() * 10 - if softmax: - u = torch.softmax(mat2line, axis=2) - else: - u = mat2line / (mat2line.sum(2, keepdim=True) + 1e-6) - size = x.shape[axis] - ind = torch.linspace(0, 1, size).to(x.device) - batch = x.shape[0] - channel = x.shape[1] - index = ind.repeat([batch, channel, 1]) - mean_position = torch.sum(index * u, dim=2) - return mean_position - - -def get_expected_points_from_map(hm, softmax=True): - """get_gaussian_map_from_points - B,C,H,W -> B,N,2 float(0, 1) float(0, 1) - softargmax function - - Args: - hm (float): Input images(BxCxHxW) - - Returns: - weighted index for axis, BxCx2. float between 0 and 1. 
- - """ - # hm = 10*hm - B, C, H, W = hm.shape - y_mean = get_gaussian_mean(hm, 2, 3, softmax=softmax) # B,C - x_mean = get_gaussian_mean(hm, 3, 2, softmax=softmax) # B,C - # return torch.cat((x_mean.unsqueeze(-1), y_mean.unsqueeze(-1)), 2) - return torch.stack([x_mean, y_mean], dim=2) - - -# Positional encoding (section 5.1) -# borrow from nerf -class Embedder: - def __init__(self, **kwargs): - self.kwargs = kwargs - self.create_embedding_fn() - - def create_embedding_fn(self): - embed_fns = [] - d = self.kwargs["input_dims"] - out_dim = 0 - if self.kwargs["include_input"]: - embed_fns.append(lambda x: x) - out_dim += d - - max_freq = self.kwargs["max_freq_log2"] - N_freqs = self.kwargs["num_freqs"] - - if self.kwargs["log_sampling"]: - freq_bands = 2.0 ** torch.linspace(0.0, max_freq, steps=N_freqs) - else: - freq_bands = torch.linspace(2.0**0.0, 2.0**max_freq, steps=N_freqs) - - for freq in freq_bands: - for p_fn in self.kwargs["periodic_fns"]: - embed_fns.append(lambda x, p_fn=p_fn, freq=freq: p_fn(x * freq)) - out_dim += d - - self.embed_fns = embed_fns - self.out_dim = out_dim - - def embed(self, inputs): - return torch.cat([fn(inputs) for fn in self.embed_fns], -1) - - -def get_embedder(multires, i=0): - import torch.nn as nn - - if i == -1: - return nn.Identity(), 3 - - embed_kwargs = { - "include_input": True, - "input_dims": 3, - "max_freq_log2": multires - 1, - "num_freqs": multires, - "log_sampling": True, - "periodic_fns": [torch.sin, torch.cos], - } - - embedder_obj = Embedder(**embed_kwargs) - embed = lambda x, eo=embedder_obj: eo.embed(x) - return embed, embedder_obj.out_dim - - -class APOPMeter: - def __init__(self) -> None: - self.tp = 0 - self.fp = 0 - self.tn = 0 - self.fn = 0 - - def update(self, pred, gt): - """ - Input: - pred, gt: Tensor() - """ - assert pred.shape == gt.shape - self.tp += torch.logical_and(pred == 1, gt == 1).sum().item() - self.fp += torch.logical_and(pred == 1, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 0, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 1, gt == 0).sum().item() - - def update_cm(self, tp, fp, tn, fn): - self.tp += tp - self.fp += fp - self.tn += tn - self.tn += fn - - -def inverse_sigmoid(x, eps=1e-5): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def get_raw_dict(args): - """ - return the dicf contained in args. - - e.g: - >>> with open(path, 'w') as f: - json.dump(get_raw_dict(args), f, indent=2) - """ - if isinstance(args, argparse.Namespace): - return vars(args) - elif isinstance(args, dict): - return args - elif isinstance(args, SLConfig): - return args._cfg_dict - else: - raise NotImplementedError("Unknown type {}".format(type(args))) - - -def stat_tensors(tensor): - assert tensor.dim() == 1 - tensor_sm = tensor.softmax(0) - entropy = (tensor_sm * torch.log(tensor_sm + 1e-9)).sum() - - return { - "max": tensor.max(), - "min": tensor.min(), - "mean": tensor.mean(), - "var": tensor.var(), - "std": tensor.var() ** 0.5, - "entropy": entropy, - } - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... 
return 'info' - >>> foo = Foo() - >>> assert str(foo) == '<Foo(info)>' - >>> assert repr(foo).startswith('<Foo(info) at ') - - Example: - >>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... return 5 - >>> baz = Baz() - >>> assert str(baz) == '<Baz(5)>' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, "__len__"): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError(f"Define the __nice__ method for {self.__class__!r}") - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f"<{classname}({nice}) at {hex(id(self))}>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f"<{classname}({nice})>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. 
- - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes - - -class ModelEma(torch.nn.Module): - def __init__(self, model, decay=0.9997, device=None): - super(ModelEma, self).__init__() - # make a copy of the model for accumulating moving average of weights - self.module = deepcopy(model) - self.module.eval() - - # import ipdb; ipdb.set_trace() - - self.decay = decay - self.device = device # perform ema on different device from model if set - if self.device is not None: - self.module.to(device=device) - - def _update(self, model, update_fn): - with torch.no_grad(): - for ema_v, model_v in zip( - self.module.state_dict().values(), model.state_dict().values() - ): - if self.device is not None: - model_v = model_v.to(device=self.device) - ema_v.copy_(update_fn(ema_v, model_v)) - - def update(self, model): - self._update(model, update_fn=lambda e, m: self.decay * e + (1.0 - self.decay) * m) - - def set(self, model): - self._update(model, update_fn=lambda e, m: m) - - -class BestMetricSingle: - def __init__(self, init_res=0.0, better="large") -> None: - self.init_res = init_res - self.best_res = init_res - self.best_ep = -1 - - self.better = better - assert better in ["large", "small"] - - def isbetter(self, new_res, old_res): - if self.better == "large": - return new_res > old_res - if self.better == "small": - return new_res < old_res - - def update(self, new_res, ep): - if self.isbetter(new_res, self.best_res): - self.best_res = new_res - self.best_ep = ep - return True - return False - - def __str__(self) -> str: - return "best_res: {}\t best_ep: {}".format(self.best_res, self.best_ep) - - def __repr__(self) -> str: - return self.__str__() - - def summary(self) -> dict: - return { - "best_res": self.best_res, - "best_ep": self.best_ep, - } - - -class BestMetricHolder: - def __init__(self, init_res=0.0, better="large", use_ema=False) -> None: - self.best_all = BestMetricSingle(init_res, better) - self.use_ema = use_ema - if use_ema: - self.best_ema = BestMetricSingle(init_res, better) - self.best_regular = BestMetricSingle(init_res, better) - - def update(self, new_res, epoch, is_ema=False): - """ - return if the results is the best. 
- """ - if not self.use_ema: - return self.best_all.update(new_res, epoch) - else: - if is_ema: - self.best_ema.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - else: - self.best_regular.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - - def summary(self): - if not self.use_ema: - return self.best_all.summary() - - res = {} - res.update({f"all_{k}": v for k, v in self.best_all.summary().items()}) - res.update({f"regular_{k}": v for k, v in self.best_regular.summary().items()}) - res.update({f"ema_{k}": v for k, v in self.best_ema.summary().items()}) - return res - - def __repr__(self) -> str: - return json.dumps(self.summary(), indent=2) - - def __str__(self) -> str: - return self.__repr__() - - -def targets_to(targets: List[Dict[str, Any]], device): - """Moves the target dicts to the given device.""" - excluded_keys = [ - "questionId", - "tokens_positive", - "strings_positive", - "tokens", - "dataset_name", - "sentence_id", - "original_img_id", - "nb_eval", - "task_id", - "original_id", - "token_span", - "caption", - "dataset_type", - ] - return [ - {k: v.to(device) if k not in excluded_keys else v for k, v in t.items()} for t in targets - ] - - -def get_phrases_from_posmap( - posmap: torch.BoolTensor, tokenized: Dict, tokenizer: AutoTokenizer -): - assert isinstance(posmap, torch.Tensor), "posmap must be torch.Tensor" - if posmap.dim() == 1: - non_zero_idx = posmap.nonzero(as_tuple=True)[0].tolist() - token_ids = [tokenized["input_ids"][i] for i in non_zero_idx] - return tokenizer.decode(token_ids) - else: - raise NotImplementedError("posmap must be 1-dim") diff --git a/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py b/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py deleted file mode 100644 index ab586e19aa63e603f63f6be9948f314b0b80689e..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/usr/diffsinger_task.py +++ /dev/null @@ -1,490 +0,0 @@ -import torch - -import utils -from utils.hparams import hparams -from .diff.net import DiffNet -from .diff.shallow_diffusion_tts import GaussianDiffusion, OfflineGaussianDiffusion -from .diffspeech_task import DiffSpeechTask -from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder -from modules.fastspeech.pe import PitchExtractor -from modules.fastspeech.fs2 import FastSpeech2 -from modules.diffsinger_midi.fs2 import FastSpeech2MIDI -from modules.fastspeech.tts_modules import mel2ph_to_dur - -from usr.diff.candidate_decoder import FFT -from utils.pitch_utils import denorm_f0 -from tasks.tts.fs2_utils import FastSpeechDataset -from tasks.tts.fs2 import FastSpeech2Task - -import numpy as np -import os -import torch.nn.functional as F - -DIFF_DECODERS = { - 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']), - 'fft': lambda hp: FFT( - hp['hidden_size'], hp['dec_layers'], hp['dec_ffn_kernel_size'], hp['num_heads']), -} - - -class DiffSingerTask(DiffSpeechTask): - def __init__(self): - super(DiffSingerTask, self).__init__() - self.dataset_cls = FastSpeechDataset - self.vocoder: BaseVocoder = get_vocoder_cls(hparams)() - if hparams.get('pe_enable') is not None and hparams['pe_enable']: - self.pe = PitchExtractor().cuda() - utils.load_ckpt(self.pe, hparams['pe_ckpt'], 'model', strict=True) - self.pe.eval() - - def build_tts_model(self): - # import torch - # from tqdm import tqdm - # v_min = torch.ones([80]) * 100 - # v_max = torch.ones([80]) * -100 - # for i, ds in enumerate(tqdm(self.dataset_cls('train'))): - # v_max = torch.max(torch.max(ds['mel'].reshape(-1, 80), 
0)[0], v_max) - # v_min = torch.min(torch.min(ds['mel'].reshape(-1, 80), 0)[0], v_min) - # if i % 100 == 0: - # print(i, v_min, v_max) - # print('final', v_min, v_max) - mel_bins = hparams['audio_num_mel_bins'] - self.model = GaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - timesteps=hparams['timesteps'], - K_step=hparams['K_step'], - loss_type=hparams['diff_loss_type'], - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - if hparams['fs2_ckpt'] != '': - utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True) - # self.model.fs2.decoder = None - for k, v in self.model.fs2.named_parameters(): - v.requires_grad = False - - def validation_step(self, sample, batch_idx): - outputs = {} - txt_tokens = sample['txt_tokens'] # [B, T_t] - - target = sample['mels'] # [B, T_s, 80] - energy = sample['energy'] - # fs2_mel = sample['fs2_mels'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - - outputs['losses'] = {} - - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - - - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - model_out = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True) - - if hparams.get('pe_enable') is not None and hparams['pe_enable']: - gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel - pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel - else: - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - pred_f0 = model_out.get('f0_denorm') - self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}') - self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'], name=f'fs2mel_{batch_idx}') - return outputs - - -class ShallowDiffusionOfflineDataset(FastSpeechDataset): - def __getitem__(self, index): - sample = super(ShallowDiffusionOfflineDataset, self).__getitem__(index) - item = self._get_item(index) - - if self.prefix != 'train' and hparams['fs2_ckpt'] != '': - fs2_ckpt = os.path.dirname(hparams['fs2_ckpt']) - item_name = item['item_name'] - fs2_mel = torch.Tensor(np.load(f'{fs2_ckpt}/P_mels_npy/{item_name}.npy')) # ~M generated by FFT-singer. 
- sample['fs2_mel'] = fs2_mel - return sample - - def collater(self, samples): - batch = super(ShallowDiffusionOfflineDataset, self).collater(samples) - if self.prefix != 'train' and hparams['fs2_ckpt'] != '': - batch['fs2_mels'] = utils.collate_2d([s['fs2_mel'] for s in samples], 0.0) - return batch - - -class DiffSingerOfflineTask(DiffSingerTask): - def __init__(self): - super(DiffSingerOfflineTask, self).__init__() - self.dataset_cls = ShallowDiffusionOfflineDataset - - def build_tts_model(self): - mel_bins = hparams['audio_num_mel_bins'] - self.model = OfflineGaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - timesteps=hparams['timesteps'], - K_step=hparams['K_step'], - loss_type=hparams['diff_loss_type'], - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - # if hparams['fs2_ckpt'] != '': - # utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True) - # self.model.fs2.decoder = None - - def run_model(self, model, sample, return_output=False, infer=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - fs2_mel = None #sample['fs2_mels'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - f0_std = sample['f0_std'] - sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph) - - output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=[target, fs2_mel], f0=f0, uv=uv, energy=energy, infer=infer) - - losses = {} - if 'diff_loss' in output: - losses['mel'] = output['diff_loss'] - # self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - # if hparams['use_pitch_embed']: - # self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - - if not return_output: - return losses - else: - return losses, output - - def validation_step(self, sample, batch_idx): - outputs = {} - txt_tokens = sample['txt_tokens'] # [B, T_t] - - target = sample['mels'] # [B, T_s, 80] - energy = sample['energy'] - # fs2_mel = sample['fs2_mels'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - - outputs['losses'] = {} - - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - - - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - fs2_mel = sample['fs2_mels'] - model_out = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, - ref_mels=[None, fs2_mel], infer=True) - if hparams.get('pe_enable') is not None and hparams['pe_enable']: - gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel - pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel - else: - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - pred_f0 = model_out.get('f0_denorm') - self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0) - self.plot_mel(batch_idx, sample['mels'], 
model_out['mel_out'], name=f'diffmel_{batch_idx}') - self.plot_mel(batch_idx, sample['mels'], fs2_mel, name=f'fs2mel_{batch_idx}') - return outputs - - def test_step(self, sample, batch_idx): - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - txt_tokens = sample['txt_tokens'] - energy = sample['energy'] - if hparams['profile_infer']: - pass - else: - mel2ph, uv, f0 = None, None, None - if hparams['use_gt_dur']: - mel2ph = sample['mel2ph'] - if hparams['use_gt_f0']: - f0 = sample['f0'] - uv = sample['uv'] - fs2_mel = sample['fs2_mels'] - outputs = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, ref_mels=[None, fs2_mel], energy=energy, - infer=True) - sample['outputs'] = self.model.out2mel(outputs['mel_out']) - sample['mel2ph_pred'] = outputs['mel2ph'] - - if hparams.get('pe_enable') is not None and hparams['pe_enable']: - sample['f0'] = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel - sample['f0_pred'] = self.pe(sample['outputs'])['f0_denorm_pred'] # pe predict from Pred mel - else: - sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams) - sample['f0_pred'] = outputs.get('f0_denorm') - return self.after_infer(sample) - - -class MIDIDataset(FastSpeechDataset): - def __getitem__(self, index): - sample = super(MIDIDataset, self).__getitem__(index) - item = self._get_item(index) - sample['f0_midi'] = torch.FloatTensor(item['f0_midi']) - sample['pitch_midi'] = torch.LongTensor(item['pitch_midi'])[:hparams['max_frames']] - - return sample - - def collater(self, samples): - batch = super(MIDIDataset, self).collater(samples) - batch['f0_midi'] = utils.collate_1d([s['f0_midi'] for s in samples], 0.0) - batch['pitch_midi'] = utils.collate_1d([s['pitch_midi'] for s in samples], 0) - # print((batch['pitch_midi'] == f0_to_coarse(batch['f0_midi'])).all()) - return batch - - -class OpencpopDataset(FastSpeechDataset): - def __getitem__(self, index): - sample = super(OpencpopDataset, self).__getitem__(index) - item = self._get_item(index) - sample['pitch_midi'] = torch.LongTensor(item['pitch_midi'])[:hparams['max_frames']] - sample['midi_dur'] = torch.FloatTensor(item['midi_dur'])[:hparams['max_frames']] - sample['is_slur'] = torch.LongTensor(item['is_slur'])[:hparams['max_frames']] - sample['word_boundary'] = torch.LongTensor(item['word_boundary'])[:hparams['max_frames']] - return sample - - def collater(self, samples): - batch = super(OpencpopDataset, self).collater(samples) - batch['pitch_midi'] = utils.collate_1d([s['pitch_midi'] for s in samples], 0) - batch['midi_dur'] = utils.collate_1d([s['midi_dur'] for s in samples], 0) - batch['is_slur'] = utils.collate_1d([s['is_slur'] for s in samples], 0) - batch['word_boundary'] = utils.collate_1d([s['word_boundary'] for s in samples], 0) - return batch - - -class DiffSingerMIDITask(DiffSingerTask): - def __init__(self): - super(DiffSingerMIDITask, self).__init__() - # self.dataset_cls = MIDIDataset - self.dataset_cls = OpencpopDataset - - def run_model(self, model, sample, return_output=False, infer=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s] - mel2ph = sample['mel2ph'] - if hparams.get('switch_midi2f0_step') is not None and self.global_step > hparams['switch_midi2f0_step']: - f0 = None - uv = None - else: - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - - spk_embed = sample.get('spk_embed') if not 
hparams['use_spk_id'] else sample.get('spk_ids') - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - f0_std = sample['f0_std'] - sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph) - - output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer, pitch_midi=sample['pitch_midi'], - midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur')) - - losses = {} - if 'diff_loss' in output: - losses['mel'] = output['diff_loss'] - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, sample['word_boundary'], losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - if not return_output: - return losses - else: - return losses, output - - def validation_step(self, sample, batch_idx): - outputs = {} - txt_tokens = sample['txt_tokens'] # [B, T_t] - - target = sample['mels'] # [B, T_s, 80] - energy = sample['energy'] - # fs2_mel = sample['fs2_mels'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - mel2ph = sample['mel2ph'] - - outputs['losses'] = {} - - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - model_out = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=None, uv=None, energy=energy, ref_mels=None, infer=True, - pitch_midi=sample['pitch_midi'], midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur')) - - if hparams.get('pe_enable') is not None and hparams['pe_enable']: - gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel - pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel - else: - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - pred_f0 = model_out.get('f0_denorm') - self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}') - self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'], name=f'fs2mel_{batch_idx}') - if hparams['use_pitch_embed']: - self.plot_pitch(batch_idx, sample, model_out) - return outputs - - def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, wdb, losses=None): - """ - :param dur_pred: [B, T], float, log scale - :param mel2ph: [B, T] - :param txt_tokens: [B, T] - :param losses: - :return: - """ - B, T = txt_tokens.shape - nonpadding = (txt_tokens != 0).float() - dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding - is_sil = torch.zeros_like(txt_tokens).bool() - for p in self.sil_ph: - is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0]) - is_sil = is_sil.float() # [B, T_txt] - - # phone duration loss - if hparams['dur_loss'] == 'mse': - losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none') - losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum() - dur_pred = (dur_pred.exp() - 1).clamp(min=0) - else: - raise NotImplementedError - - # use linear scale for sent and word duration - if hparams['lambda_word_dur'] > 0: - idx = F.pad(wdb.cumsum(axis=1), (1, 0))[:, :-1] - # word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_(1, idx, 
midi_dur) # midi_dur can be implied by add gt-ph_dur - word_dur_p = dur_pred.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_pred) - word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_gt) - wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none') - word_nonpadding = (word_dur_g > 0).float() - wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum() - losses['wdur'] = wdur_loss * hparams['lambda_word_dur'] - if hparams['lambda_sent_dur'] > 0: - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean') - losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur'] - - -class AuxDecoderMIDITask(FastSpeech2Task): - def __init__(self): - super().__init__() - # self.dataset_cls = MIDIDataset - self.dataset_cls = OpencpopDataset - - def build_tts_model(self): - if hparams.get('use_midi') is not None and hparams['use_midi']: - self.model = FastSpeech2MIDI(self.phone_encoder) - else: - self.model = FastSpeech2(self.phone_encoder) - - def run_model(self, model, sample, return_output=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - f0_std = sample['f0_std'] - sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph) - - output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=target, f0=f0, uv=uv, energy=energy, infer=False, pitch_midi=sample['pitch_midi'], - midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur')) - - losses = {} - self.add_mel_loss(output['mel_out'], target, losses) - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, sample['word_boundary'], losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - if not return_output: - return losses - else: - return losses, output - - def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, wdb, losses=None): - """ - :param dur_pred: [B, T], float, log scale - :param mel2ph: [B, T] - :param txt_tokens: [B, T] - :param losses: - :return: - """ - B, T = txt_tokens.shape - nonpadding = (txt_tokens != 0).float() - dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding - is_sil = torch.zeros_like(txt_tokens).bool() - for p in self.sil_ph: - is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0]) - is_sil = is_sil.float() # [B, T_txt] - - # phone duration loss - if hparams['dur_loss'] == 'mse': - losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none') - losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum() - dur_pred = (dur_pred.exp() - 1).clamp(min=0) - else: - raise NotImplementedError - - # use linear scale for sent and word duration - if hparams['lambda_word_dur'] > 0: - idx = F.pad(wdb.cumsum(axis=1), (1, 0))[:, :-1] - # word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_(1, idx, midi_dur) # midi_dur can be implied by add gt-ph_dur - word_dur_p = dur_pred.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_pred) - word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_gt) - wdur_loss = 
F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none') - word_nonpadding = (word_dur_g > 0).float() - wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum() - losses['wdur'] = wdur_loss * hparams['lambda_word_dur'] - if hparams['lambda_sent_dur'] > 0: - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean') - losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur'] - - def validation_step(self, sample, batch_idx): - outputs = {} - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True) - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - mel_out = self.model.out2mel(model_out['mel_out']) - outputs = utils.tensors_to_scalars(outputs) - # if sample['mels'].shape[0] == 1: - # self.add_laplace_var(mel_out, sample['mels'], outputs) - if batch_idx < hparams['num_valid_plots']: - self.plot_mel(batch_idx, sample['mels'], mel_out) - self.plot_dur(batch_idx, sample, model_out) - if hparams['use_pitch_embed']: - self.plot_pitch(batch_idx, sample, model_out) - return outputs \ No newline at end of file diff --git a/spaces/SpacesExamples/fastapi_t5/static/index.html b/spaces/SpacesExamples/fastapi_t5/static/index.html deleted file mode 100644 index 7e2ccc20465e2ed59250df44c42c4e18c9ccaa97..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/fastapi_t5/static/index.html +++ /dev/null @@ -1,36 +0,0 @@ - - - - - - Fast API 🤗 Space served with Uvicorn - - - - -
    - [index.html body: heading "Text generation using Flan T5"; model link: google/flan-t5-small; remaining HTML markup not preserved]
    - - \ No newline at end of file diff --git a/spaces/StealYourGhost/Joeythemonster-anything-midjourney-v-4-1/app.py b/spaces/StealYourGhost/Joeythemonster-anything-midjourney-v-4-1/app.py deleted file mode 100644 index 262436d8b50f87b0953c645576cc3184b3b27b43..0000000000000000000000000000000000000000 --- a/spaces/StealYourGhost/Joeythemonster-anything-midjourney-v-4-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Joeythemonster/anything-midjourney-v-4-1").launch() \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/leres/leres/net_tools.py b/spaces/Superlang/ImageProcessor/annotator/leres/leres/net_tools.py deleted file mode 100644 index 745ba5a0ef19adb869525e6b252db86780b8126e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/leres/leres/net_tools.py +++ /dev/null @@ -1,54 +0,0 @@ -import importlib -import torch -import os -from collections import OrderedDict - - -def get_func(func_name): - """Helper to return a function object by name. func_name must identify a - function in this module or the path to a function relative to the base - 'modeling' module. - """ - if func_name == '': - return None - try: - parts = func_name.split('.') - # Refers to a function in this module - if len(parts) == 1: - return globals()[parts[0]] - # Otherwise, assume we're referencing a module under modeling - module_name = 'annotator.leres.leres.' + '.'.join(parts[:-1]) - module = importlib.import_module(module_name) - return getattr(module, parts[-1]) - except Exception: - print('Failed to f1ind function: %s', func_name) - raise - -def load_ckpt(args, depth_model, shift_model, focal_model): - """ - Load checkpoint. - """ - if os.path.isfile(args.load_ckpt): - print("loading checkpoint %s" % args.load_ckpt) - checkpoint = torch.load(args.load_ckpt) - if shift_model is not None: - shift_model.load_state_dict(strip_prefix_if_present(checkpoint['shift_model'], 'module.'), - strict=True) - if focal_model is not None: - focal_model.load_state_dict(strip_prefix_if_present(checkpoint['focal_model'], 'module.'), - strict=True) - depth_model.load_state_dict(strip_prefix_if_present(checkpoint['depth_model'], "module."), - strict=True) - del checkpoint - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - -def strip_prefix_if_present(state_dict, prefix): - keys = sorted(state_dict.keys()) - if not all(key.startswith(prefix) for key in keys): - return state_dict - stripped_state_dict = OrderedDict() - for key, value in state_dict.items(): - stripped_state_dict[key.replace(prefix, "")] = value - return stripped_state_dict \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/builder.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/builder.py deleted file mode 100644 index 1f5b971252bfc971c3ffbaa27746d69b1d3ea9fd..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/builder.py +++ /dev/null @@ -1,46 +0,0 @@ -import warnings - -from annotator.uniformer.mmcv.cnn import MODELS as MMCV_MODELS -from annotator.uniformer.mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -HEADS = MODELS -LOSSES = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_head(cfg): - """Build 
head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/spaces/TRI-ML/risk_biased_prediction/scripts/scripts_utils/train_main.py b/spaces/TRI-ML/risk_biased_prediction/scripts/scripts_utils/train_main.py deleted file mode 100644 index 31a9e948dad8e384edcf158ac1c6c8f5a62a3464..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/scripts/scripts_utils/train_main.py +++ /dev/null @@ -1,144 +0,0 @@ -import os -import shutil - -from mmcv import Config -from pytorch_lightning import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping -from pytorch_lightning.loggers import WandbLogger -from pytorch_lightning.utilities.seed import seed_everything - -import wandb - -from risk_biased.utils.callbacks import SwitchTrainingModeCallback -from risk_biased.utils.callbacks import ( - HistogramCallback, - PlotTrajCallback, - DrawCallbackParams, -) -from risk_biased.utils.load_model import load_from_config -from scripts.scripts_utils.load_utils import get_config - - -def create_log_dir(): - working_dir = os.path.dirname(os.path.realpath(__file__)) - log_dir = os.path.join(working_dir, "logs") - if not os.path.exists(log_dir): - os.mkdir(log_dir) - return log_dir - - -def save_log_config(cfg: Config, predictor): - # Save and log the config (not only a copy of the config file because settings may have been overwritten by argparse) - log_config_path = os.path.join(wandb.run.dir, "learning_config.py") - cfg.dump(log_config_path) - wandb.save(log_config_path) - # Save files listed in the current wandb log dir - for file_name in cfg.files_to_log: - dest_path = os.path.join(wandb.run.dir, os.path.basename(file_name)) - shutil.copy(file_name, dest_path) - wandb.save(dest_path) - - if cfg.log_weights_and_grads: - wandb.watch(predictor, log="all", log_freq=100) - - -def create_callbacks(cfg: Config, log_dir: str, is_interaction: bool) -> list: - # Save checkpoint of last model in a specific directory - last_run_checkpoint_callback = ModelCheckpoint( - monitor="val/minfde/prior", - mode="min", - filename="epoch={epoch:02d}-step={step}-val_minfde_prior={val/minfde/prior:.2f}", - auto_insert_metric_name=False, - dirpath=os.path.join(log_dir, "checkpoints_last_run"), - save_last=True, - ) - - # Save checkpoints of current run in current wandb log dir - checkpoint_callback = ModelCheckpoint( - monitor="val/minfde/prior", - mode="min", - filename="epoch={epoch:02d}-step={step}-val_minfde_prior={val/minfde/prior:.2f}", - auto_insert_metric_name=False, - dirpath=wandb.run.dir, - save_last=True, - ) - callbacks = [ - last_run_checkpoint_callback, - checkpoint_callback, - ] - - if not is_interaction: - histogram_callback = HistogramCallback( - params=DrawCallbackParams.from_config(cfg), - n_samples=1000, - ) - - plot_callback = PlotTrajCallback( - params=DrawCallbackParams.from_config(cfg), n_samples=10 - ) - 
callbacks.append(histogram_callback) - callbacks.append(plot_callback) - - if cfg.early_stopping: - early_stopping_callback = EarlyStopping( - monitor="val/minfde/prior", - min_delta=-0.2, - patience=5, - verbose=False, - mode="min", - ) - callbacks.append(early_stopping_callback) - - switch_mode_callback = SwitchTrainingModeCallback( - switch_at_epoch=cfg.num_epochs_cvae - ) - callbacks.append(switch_mode_callback) - - return callbacks - - -def get_trainer(cfg: Config, logger: WandbLogger, callbacks: list) -> Trainer: - - num_epochs = cfg.num_epochs_cvae + cfg.num_epochs_bias - - return Trainer( - gpus=cfg.gpus, - max_epochs=num_epochs, - logger=logger, - val_check_interval=float(cfg.val_check_interval_epoch), - accumulate_grad_batches=cfg.accumulate_grad_batches, - callbacks=callbacks, - ) - - -def main(is_interaction: bool = False): - - log_dir = create_log_dir() - cfg = get_config(log_dir, is_interaction) - - predictor, dataloaders, cfg = load_from_config(cfg) - - if cfg.seed is not None: - seed_everything(cfg.seed) - - save_log_config(cfg, predictor) - - logger = WandbLogger( - project=cfg.project, log_model=True, save_dir=log_dir, id=wandb.run.id - ) - - callbacks = create_callbacks(cfg, log_dir, is_interaction) - - trainer = get_trainer(cfg, logger, callbacks) - - trainer.fit( - predictor, - train_dataloaders=dataloaders.train_dataloader(), - val_dataloaders=dataloaders.val_dataloader(), - ) - - wandb.finish() - - -if __name__ == "__main__": - main(is_interaction=True) diff --git a/spaces/Tanapol/object_detection/app.py b/spaces/Tanapol/object_detection/app.py deleted file mode 100644 index 906e8dfef028a21b5d4611e2f245536d8bb77200..0000000000000000000000000000000000000000 --- a/spaces/Tanapol/object_detection/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import streamlit as st -from transformers import pipeline -import matplotlib.pyplot as plt -import requests -from PIL import Image - -obj_model = pipeline("object-detection", model="facebook/detr-resnet-50") - -def get_img_from_url(url): - return Image.open(requests.get(url, stream=True).raw) - -def main(): - st.title("Object Detection") - - with st.form("text_field"): - url = st.text_input("Enter an image URL", "https://images.unsplash.com/photo-1543852786-1cf6624b9987?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=987&q=80") - st.write("*We recommend to use Unsplash website for browsing an image,[click here!](https://unsplash.com/)") - img = get_img_from_url(url) - # clicked==True only when the button is clicked - clicked = st.form_submit_button("Submit") - if clicked: - results = obj_model(img) - st.write("**——————————————————————**") - st.write("**Input image:**") - st.image(img) - st.json(results) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/clip/clip.py b/spaces/TandCAcceptMe/face-swap-docker/clip/clip.py deleted file mode 100644 index f7a5da5e69e0a3b41383734711ccfff1923a9ef9..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/clip/clip.py +++ /dev/null @@ -1,245 +0,0 @@ -import hashlib -import os -import urllib -import warnings -from typing import Any, Union, List -from pkg_resources import packaging - -import torch -from PIL import Image -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize -from tqdm import tqdm - -from .model import build_model -from .simple_tokenizer import SimpleTokenizer as _Tokenizer - -try: - from torchvision.transforms import 
InterpolationMode - BICUBIC = InterpolationMode.BICUBIC -except ImportError: - BICUBIC = Image.BICUBIC - - -if packaging.version.parse(torch.__version__) < packaging.version.parse("1.7.1"): - warnings.warn("PyTorch version 1.7.1 or higher is recommended") - - -__all__ = ["available_models", "load", "tokenize"] -_tokenizer = _Tokenizer() - -_MODELS = { - "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt", - "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt", - "RN50x64": "https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt", - "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", - "ViT-L/14": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt", - "ViT-L/14@336px": "https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt", -} - - -def _download(url: str, root: str): - os.makedirs(root, exist_ok=True) - filename = os.path.basename(url) - - expected_sha256 = url.split("/")[-2] - download_target = os.path.join(root, filename) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if os.path.isfile(download_target): - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256: - return download_target - else: - warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file") - - with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: - with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True, unit_divisor=1024) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) - - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256: - raise RuntimeError("Model has been downloaded but the SHA256 checksum does not not match") - - return download_target - - -def _convert_image_to_rgb(image): - return image.convert("RGB") - - -def _transform(n_px): - return Compose([ - Resize(n_px, interpolation=BICUBIC), - CenterCrop(n_px), - _convert_image_to_rgb, - ToTensor(), - Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), - ]) - - -def available_models() -> List[str]: - """Returns the names of available CLIP models""" - return list(_MODELS.keys()) - - -def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit: bool = False, download_root: str = None): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - - 
device : Union[str, torch.device] - The device to put the loaded model - - jit : bool - Whether to load the optimized JIT model or more hackable non-JIT model (default). - - download_root: str - path to download the model files; by default, it uses "~/.cache/clip" - - Returns - ------- - model : torch.nn.Module - The CLIP model - - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if name in _MODELS: - model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip")) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {available_models()}") - - with open(model_path, 'rb') as opened_file: - try: - # loading JIT archive - model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead") - jit = False - state_dict = torch.load(opened_file, map_location="cpu") - - if not jit: - model = build_model(state_dict or model.state_dict()).to(device) - if str(device) == "cpu": - model.float() - return model, _transform(model.visual.input_resolution) - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def _node_get(node: torch._C.Node, key: str): - """Gets attributes of a node which is polymorphic over return type. - - From https://github.com/pytorch/pytorch/pull/82628 - """ - sel = node.kindOf(key) - return getattr(node, sel)(key) - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(_node_get(node, "value")).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if _node_get(inputs[i].node(), "value") == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - - model.float() - - return model, _transform(model.input_resolution.item()) - - -def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> Union[torch.IntTensor, torch.LongTensor]: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or 
a list of input strings to tokenize - - context_length : int - The context length to use; all CLIP models use 77 as the context length - - truncate: bool - Whether to truncate the text in case its encoding is longer than the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]. - We return LongTensor when torch version is <1.8.0, since older index_select requires indices to be long. - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = _tokenizer.encoder["<|startoftext|>"] - eot_token = _tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - if packaging.version.parse(torch.__version__) < packaging.version.parse("1.8.0"): - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - else: - result = torch.zeros(len(all_tokens), context_length, dtype=torch.int) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - if truncate: - tokens = tokens[:context_length] - tokens[-1] = eot_token - else: - raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py deleted file mode 100644 index b24c123f26faa5f17975fe13b6756151da229b2f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py +++ /dev/null @@ -1,369 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from typing import List, Tuple, Union -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch.nn import functional as F - -from detectron2.layers import cat, ciou_loss, diou_loss -from detectron2.structures import Boxes - -# Value for clamping large dw and dh predictions. The heuristic is that we clamp -# such that dw and dh are no larger than what would transform a 16px box into a -# 1000px box (based on a small anchor, 16px, and a typical image size, 1000px). -_DEFAULT_SCALE_CLAMP = math.log(1000.0 / 16) - - -__all__ = ["Box2BoxTransform", "Box2BoxTransformRotated", "Box2BoxTransformLinear"] - - -@torch.jit.script -class Box2BoxTransform(object): - """ - The box-to-box transform defined in R-CNN. The transformation is parameterized - by 4 deltas: (dx, dy, dw, dh). The transformation scales the box's width and height - by exp(dw), exp(dh) and shifts a box's center by the offset (dx * width, dy * height). - """ - - def __init__( - self, weights: Tuple[float, float, float, float], scale_clamp: float = _DEFAULT_SCALE_CLAMP - ): - """ - Args: - weights (4-element tuple): Scaling factors that are applied to the - (dx, dy, dw, dh) deltas. In Fast R-CNN, these were originally set - such that the deltas have unit variance; now they are treated as - hyperparameters of the system. - scale_clamp (float): When predicting deltas, the predicted box scaling - factors (dw and dh) are clamped such that they are <= scale_clamp. - """ - self.weights = weights - self.scale_clamp = scale_clamp - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx, dy, dw, dh) that can be used - to transform the `src_boxes` into the `target_boxes`. 
That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless - any delta is too large and is clamped). - - Args: - src_boxes (Tensor): source boxes, e.g., object proposals - target_boxes (Tensor): target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_widths = src_boxes[:, 2] - src_boxes[:, 0] - src_heights = src_boxes[:, 3] - src_boxes[:, 1] - src_ctr_x = src_boxes[:, 0] + 0.5 * src_widths - src_ctr_y = src_boxes[:, 1] + 0.5 * src_heights - - target_widths = target_boxes[:, 2] - target_boxes[:, 0] - target_heights = target_boxes[:, 3] - target_boxes[:, 1] - target_ctr_x = target_boxes[:, 0] + 0.5 * target_widths - target_ctr_y = target_boxes[:, 1] + 0.5 * target_heights - - wx, wy, ww, wh = self.weights - dx = wx * (target_ctr_x - src_ctr_x) / src_widths - dy = wy * (target_ctr_y - src_ctr_y) / src_heights - dw = ww * torch.log(target_widths / src_widths) - dh = wh * torch.log(target_heights / src_heights) - - deltas = torch.stack((dx, dy, dw, dh), dim=1) - assert (src_widths > 0).all().item(), "Input boxes to Box2BoxTransform are not valid!" - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx, dy, dw, dh) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. - deltas[i] represents k potentially different class-specific - box transformations for the single box boxes[i]. - boxes (Tensor): boxes to transform, of shape (N, 4) - """ - deltas = deltas.float() # ensure fp32 for decoding precision - boxes = boxes.to(deltas.dtype) - - widths = boxes[:, 2] - boxes[:, 0] - heights = boxes[:, 3] - boxes[:, 1] - ctr_x = boxes[:, 0] + 0.5 * widths - ctr_y = boxes[:, 1] + 0.5 * heights - - wx, wy, ww, wh = self.weights - dx = deltas[:, 0::4] / wx - dy = deltas[:, 1::4] / wy - dw = deltas[:, 2::4] / ww - dh = deltas[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.scale_clamp) - dh = torch.clamp(dh, max=self.scale_clamp) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - x1 = pred_ctr_x - 0.5 * pred_w - y1 = pred_ctr_y - 0.5 * pred_h - x2 = pred_ctr_x + 0.5 * pred_w - y2 = pred_ctr_y + 0.5 * pred_h - pred_boxes = torch.stack((x1, y1, x2, y2), dim=-1) - return pred_boxes.reshape(deltas.shape) - - -@torch.jit.script -class Box2BoxTransformRotated(object): - """ - The box-to-box transform defined in Rotated R-CNN. The transformation is parameterized - by 5 deltas: (dx, dy, dw, dh, da). The transformation scales the box's width and height - by exp(dw), exp(dh), shifts a box's center by the offset (dx * width, dy * height), - and rotate a box's angle by da (radians). - Note: angles of deltas are in radians while angles of boxes are in degrees. - """ - - def __init__( - self, - weights: Tuple[float, float, float, float, float], - scale_clamp: float = _DEFAULT_SCALE_CLAMP, - ): - """ - Args: - weights (5-element tuple): Scaling factors that are applied to the - (dx, dy, dw, dh, da) deltas. These are treated as - hyperparameters of the system. - scale_clamp (float): When predicting deltas, the predicted box scaling - factors (dw and dh) are clamped such that they are <= scale_clamp. 
- """ - self.weights = weights - self.scale_clamp = scale_clamp - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx, dy, dw, dh, da) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless - any delta is too large and is clamped). - - Args: - src_boxes (Tensor): Nx5 source boxes, e.g., object proposals - target_boxes (Tensor): Nx5 target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_ctr_x, src_ctr_y, src_widths, src_heights, src_angles = torch.unbind(src_boxes, dim=1) - - target_ctr_x, target_ctr_y, target_widths, target_heights, target_angles = torch.unbind( - target_boxes, dim=1 - ) - - wx, wy, ww, wh, wa = self.weights - dx = wx * (target_ctr_x - src_ctr_x) / src_widths - dy = wy * (target_ctr_y - src_ctr_y) / src_heights - dw = ww * torch.log(target_widths / src_widths) - dh = wh * torch.log(target_heights / src_heights) - # Angles of deltas are in radians while angles of boxes are in degrees. - # the conversion to radians serve as a way to normalize the values - da = target_angles - src_angles - da = (da + 180.0) % 360.0 - 180.0 # make it in [-180, 180) - da *= wa * math.pi / 180.0 - - deltas = torch.stack((dx, dy, dw, dh, da), dim=1) - assert ( - (src_widths > 0).all().item() - ), "Input boxes to Box2BoxTransformRotated are not valid!" - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx, dy, dw, dh, da) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*5). - deltas[i] represents box transformation for the single box boxes[i]. - boxes (Tensor): boxes to transform, of shape (N, 5) - """ - assert deltas.shape[1] % 5 == 0 and boxes.shape[1] == 5 - - boxes = boxes.to(deltas.dtype).unsqueeze(2) - - ctr_x = boxes[:, 0] - ctr_y = boxes[:, 1] - widths = boxes[:, 2] - heights = boxes[:, 3] - angles = boxes[:, 4] - - wx, wy, ww, wh, wa = self.weights - - dx = deltas[:, 0::5] / wx - dy = deltas[:, 1::5] / wy - dw = deltas[:, 2::5] / ww - dh = deltas[:, 3::5] / wh - da = deltas[:, 4::5] / wa - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.scale_clamp) - dh = torch.clamp(dh, max=self.scale_clamp) - - pred_boxes = torch.zeros_like(deltas) - pred_boxes[:, 0::5] = dx * widths + ctr_x # x_ctr - pred_boxes[:, 1::5] = dy * heights + ctr_y # y_ctr - pred_boxes[:, 2::5] = torch.exp(dw) * widths # width - pred_boxes[:, 3::5] = torch.exp(dh) * heights # height - - # Following original RRPN implementation, - # angles of deltas are in radians while angles of boxes are in degrees. - pred_angle = da * 180.0 / math.pi + angles - pred_angle = (pred_angle + 180.0) % 360.0 - 180.0 # make it in [-180, 180) - - pred_boxes[:, 4::5] = pred_angle - - return pred_boxes - - -class Box2BoxTransformLinear(object): - """ - The linear box-to-box transform defined in FCOS. The transformation is parameterized - by the distance from the center of (square) src box to 4 edges of the target box. - """ - - def __init__(self, normalize_by_size=True): - """ - Args: - normalize_by_size: normalize deltas by the size of src (anchor) boxes. 
- """ - self.normalize_by_size = normalize_by_size - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx1, dy1, dx2, dy2) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true. - The center of src must be inside target boxes. - - Args: - src_boxes (Tensor): square source boxes, e.g., anchors - target_boxes (Tensor): target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_ctr_x = 0.5 * (src_boxes[:, 0] + src_boxes[:, 2]) - src_ctr_y = 0.5 * (src_boxes[:, 1] + src_boxes[:, 3]) - - target_l = src_ctr_x - target_boxes[:, 0] - target_t = src_ctr_y - target_boxes[:, 1] - target_r = target_boxes[:, 2] - src_ctr_x - target_b = target_boxes[:, 3] - src_ctr_y - - deltas = torch.stack((target_l, target_t, target_r, target_b), dim=1) - if self.normalize_by_size: - stride_w = src_boxes[:, 2] - src_boxes[:, 0] - stride_h = src_boxes[:, 3] - src_boxes[:, 1] - strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1) - deltas = deltas / strides - - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx1, dy1, dx2, dy2) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. - deltas[i] represents k potentially different class-specific - box transformations for the single box boxes[i]. - boxes (Tensor): boxes to transform, of shape (N, 4) - """ - # Ensure the output is a valid box. See Sec 2.1 of https://arxiv.org/abs/2006.09214 - deltas = F.relu(deltas) - boxes = boxes.to(deltas.dtype) - - ctr_x = 0.5 * (boxes[:, 0] + boxes[:, 2]) - ctr_y = 0.5 * (boxes[:, 1] + boxes[:, 3]) - if self.normalize_by_size: - stride_w = boxes[:, 2] - boxes[:, 0] - stride_h = boxes[:, 3] - boxes[:, 1] - strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1) - deltas = deltas * strides - - l = deltas[:, 0::4] - t = deltas[:, 1::4] - r = deltas[:, 2::4] - b = deltas[:, 3::4] - - pred_boxes = torch.zeros_like(deltas) - pred_boxes[:, 0::4] = ctr_x[:, None] - l # x1 - pred_boxes[:, 1::4] = ctr_y[:, None] - t # y1 - pred_boxes[:, 2::4] = ctr_x[:, None] + r # x2 - pred_boxes[:, 3::4] = ctr_y[:, None] + b # y2 - return pred_boxes - - -def _dense_box_regression_loss( - anchors: List[Union[Boxes, torch.Tensor]], - box2box_transform: Box2BoxTransform, - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - fg_mask: torch.Tensor, - box_reg_loss_type="smooth_l1", - smooth_l1_beta=0.0, -): - """ - Compute loss for dense multi-level box regression. - Loss is accumulated over ``fg_mask``. - - Args: - anchors: #lvl anchor boxes, each is (HixWixA, 4) - pred_anchor_deltas: #lvl predictions, each is (N, HixWixA, 4) - gt_boxes: N ground truth boxes, each has shape (R, 4) (R = sum(Hi * Wi * A)) - fg_mask: the foreground boolean mask of shape (N, R) to compute loss on - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou", - "diou", "ciou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to - use L1 loss. 
Only used when `box_reg_loss_type` is "smooth_l1" - """ - if isinstance(anchors[0], Boxes): - anchors = type(anchors[0]).cat(anchors).tensor # (R, 4) - else: - anchors = cat(anchors) - if box_reg_loss_type == "smooth_l1": - gt_anchor_deltas = [box2box_transform.get_deltas(anchors, k) for k in gt_boxes] - gt_anchor_deltas = torch.stack(gt_anchor_deltas) # (N, R, 4) - loss_box_reg = smooth_l1_loss( - cat(pred_anchor_deltas, dim=1)[fg_mask], - gt_anchor_deltas[fg_mask], - beta=smooth_l1_beta, - reduction="sum", - ) - elif box_reg_loss_type == "giou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = giou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - elif box_reg_loss_type == "diou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = diou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - elif box_reg_loss_type == "ciou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = ciou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - else: - raise ValueError(f"Invalid dense box regression loss type '{box_reg_loss_type}'") - return loss_box_reg diff --git a/spaces/Tetel/secondbing/EdgeGPT/conversation_style.py b/spaces/Tetel/secondbing/EdgeGPT/conversation_style.py deleted file mode 100644 index 284ae24b387333b63cd866ab5fa691e7592b337d..0000000000000000000000000000000000000000 --- a/spaces/Tetel/secondbing/EdgeGPT/conversation_style.py +++ /dev/null @@ -1,63 +0,0 @@ -from enum import Enum - -try: - from typing import Union, Literal -except ImportError: - from typing_extensions import Literal -from typing import Optional - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "iycapbing", - "iyxapbing", - "rai271", - "prtime2t", - "smartname", - "enbsnptrc", - "dv3sugg", - "iyoloxap", - "iyoloneutral", - "h3imaginative", - "saharagenconv5", - "dsblhlthcrd", - "clgalileo", - "gencontentv3", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "saharagenconv5", - "objopinion", - "dsblhlthcrd", - "dv3sugg", - "autosave", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3precise", - "objopinion", - "dsblhlthcrd", - "dv3sugg", - "autosave", - "clgalileo", - "gencontentv3", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] diff --git a/spaces/Tommyyyyyy-20/text_generator/README.md b/spaces/Tommyyyyyy-20/text_generator/README.md deleted file mode 100644 index 4da423a28d3f439b85081fdb80fef2c6177478d7..0000000000000000000000000000000000000000 --- a/spaces/Tommyyyyyy-20/text_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generator -emoji: 💩 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TornikeO/dreambooth-training/README.md b/spaces/TornikeO/dreambooth-training/README.md deleted file mode 
100644 index 66a852da46de13165bc3419e7e427c8ad76b97e0..0000000000000000000000000000000000000000 --- a/spaces/TornikeO/dreambooth-training/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Dreambooth Training -emoji: ☁️ -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.11 -app_file: app.py -pinned: false -license: mit -duplicated_from: multimodalart/dreambooth-training ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vageesh1/clip_gpt2/app.py b/spaces/Vageesh1/clip_gpt2/app.py deleted file mode 100644 index 4c46d89616d581fe71539f2fd2db1360a5d6034e..0000000000000000000000000000000000000000 --- a/spaces/Vageesh1/clip_gpt2/app.py +++ /dev/null @@ -1,84 +0,0 @@ -import torch -import clip -import PIL.Image -from PIL import Image -import skimage.io as io -import streamlit as st -from transformers import GPT2Tokenizer, GPT2LMHeadModel, AdamW, get_linear_schedule_with_warmup -from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel -from model import generate2,ClipCaptionModel -from engine import inference - - -model_trained = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") -model_trained.load_state_dict(torch.load('model_trained.pth',map_location=torch.device('cpu')),strict=False) -image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") -tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - -def show_n_generate(img, model, greedy = True): - image = Image.open(img) - pixel_values = image_processor(image, return_tensors ="pt").pixel_values - - if greedy: - generated_ids = model.generate(pixel_values, max_new_tokens = 30) - else: - generated_ids = model.generate( - pixel_values, - do_sample=True, - max_new_tokens = 30, - top_k=5) - generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] - return generated_text - -device = "cpu" -clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) -tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - -prefix_length = 10 - -model = ClipCaptionModel(prefix_length) - -model.load_state_dict(torch.load('model.h5',map_location=torch.device('cpu')),strict=False) - -model = model.eval() - -coco_model = ClipCaptionModel(prefix_length) -coco_model.load_state_dict(torch.load('COCO_model.h5',map_location=torch.device('cpu')),strict=False) -model = model.eval() - - -def ui(): - st.markdown("# Image Captioning") - # st.markdown("## Done By- Vageesh and Rushil") - uploaded_file = st.file_uploader("Upload an Image", type=['png', 'jpeg', 'jpg']) - - if uploaded_file is not None: - image = io.imread(uploaded_file) - pil_image = PIL.Image.fromarray(image) - image = preprocess(pil_image).unsqueeze(0).to(device) - - option = st.selectbox('Please select the Model',('Clip Captioning','Attention Decoder','VIT+GPT2')) - - if option=='Clip Captioning': - with torch.no_grad(): - prefix = clip_model.encode_image(image).to(device, dtype=torch.float32) - prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) - generated_text_prefix = generate2(model, tokenizer, embed=prefix_embed) - - st.image(uploaded_file, width = 500, channels = 'RGB') - st.markdown("**PREDICTION:** " + generated_text_prefix) - elif option=='Attention Decoder': - out = inference(uploaded_file) - st.image(uploaded_file, width = 500, channels = 'RGB') - st.markdown("**PREDICTION:** " + out) - - # elif option=='VIT+GPT2': - # 
out=show_n_generate(uploaded_file, greedy = False, model = model_trained) - # st.image(uploaded_file, width = 500, channels = 'RGB') - # st.markdown("**PREDICTION:** " + out) - - - -if __name__ == '__main__': - ui() - diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Weuseing.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Weuseing.py deleted file mode 100644 index ba79e8b9c2573418720495a20d4c1c8d5a6ca7e9..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Weuseing.py +++ /dev/null @@ -1,29 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://api.gptplus.one' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': '*/*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - } - data = { - 'messages': messages, - 'model': model, - } - response = requests.post('https://api.gptplus.one/chat-process', json=data, stream=True) - print(response) - - for token in response.iter_content(chunk_size=None): - yield (token.decode('utf-8')) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/script.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/script.py deleted file mode 100644 index c66c6b9992cd0c2a5e20fd97819bc34f9e1435b8..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/script.py +++ /dev/null @@ -1,51 +0,0 @@ -import os, sys, subprocess, inspect -from dataclasses import dataclass -from typing import Any -from argparse import ArgumentParser - - -@dataclass -class Param(): - "A parameter in a function used in `anno_parser` or `call_parse`" - help:str=None - type:type=None - opt:bool=True - action:str=None - nargs:str=None - const:str=None - choices:str=None - required:bool=None - - @property - def pre(self): return '--' if self.opt else '' - @property - def kwargs(self): return {k:v for k,v in self.__dict__.items() - if v is not None and k!='opt'} - -def anno_parser(func): - "Look at params (annotated with `Param`) in func and return an `ArgumentParser`" - p = ArgumentParser(description=func.__doc__) - for k,v in inspect.signature(func).parameters.items(): - param = func.__annotations__.get(k, Param()) - kwargs = param.kwargs - if v.default != inspect.Parameter.empty: kwargs['default'] = v.default - p.add_argument(f"{param.pre}{k}", **kwargs) - return p - -def call_parse(func): - "Decorator to create a simple CLI from `func` using `anno_parser`" - name = inspect.currentframe().f_back.f_globals['__name__'] - if name == "__main__": - args = anno_parser(func).parse_args() - func(**args.__dict__) - else: return func - -def call_plac(f): - "Decorator to create a simple CLI from `func` using `plac`" - name = inspect.currentframe().f_back.f_globals['__name__'] - if name == '__main__': - import plac - res = plac.call(f) - if callable(res): res() - else: return f - diff --git a/spaces/XzJosh/nine1-Bert-VITS2/models.py b/spaces/XzJosh/nine1-Bert-VITS2/models.py deleted file mode 100644 index 
d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nine1-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, 
mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - 
super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in 
reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 
1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = 
out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - 
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/app.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/app.py deleted file mode 100644 index 9e0133a0fb21a16bf8ea998dc82846a5eda2cfd4..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/app.py +++ /dev/null @@ -1,344 +0,0 @@ -import os -import random - -import 
autocuda -from pyabsa.utils.pyabsa_utils import fprint - -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, \ - DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil -from Waifu2x.magnify import ImageMagnifier - -magnifier = ImageMagnifier() - -start_time = time.time() -is_colab = utils.is_google_colab() - -CUDA_VISIBLE_DEVICES = '' -device = autocuda.auto_cuda() - -dtype = torch.float16 if device != 'cpu' else torch.float32 - - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - - -models = [ - Model("anything v3", "Linaqruf/anything-v3.0", "anything v3 style"), -] -# Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), -# Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), -# Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), -# Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ") -# Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), -# Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""), -# Model("Robo Diffusion", "nousr/robo-diffusion", ""), - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - predict_epsilon=True, - thresholding=False, - algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=dtype, scheduler=scheduler, - safety_checker=lambda images, clip_input: (images, False)) - -else: # download all models - print(f"{datetime.datetime.now()} Downloading vae...") - vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=dtype) - for model in models: - try: - print(f"{datetime.datetime.now()} Downloading {model.name} model...") - unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=dtype) - model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, - torch_dtype=dtype, scheduler=scheduler, - safety_checker=None) - model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, - torch_dtype=dtype, - scheduler=scheduler, safety_checker=None) - except Exception as e: - print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e)) - models.remove(model) - pipe = models[0].pipe_t2i - -# model.pipe_i2i = torch.compile(model.pipe_i2i) -# model.pipe_t2i = torch.compile(model.pipe_t2i) -if torch.cuda.is_available(): - pipe = pipe.to(device) - - -# device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - - -def on_model_change(model_name): - prefix = "Enter prompt. 
\"" + next((m.prefix for m in models if m.name == model_name), - None) + "\" is prefixed automatically" if model_name != models[ - 0].name else "Don't forget to use the custom model prefix in the prompt!" - - return gr.update(visible=model_name == models[0].name), gr.update(placeholder=prefix) - - -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, - neg_prompt="", scale_factor=2): - fprint(psutil.virtual_memory()) # print memory usage - prompt = 'detailed fingers, beautiful hands,' + prompt - fprint(f"Prompt: {prompt}") - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - generator = torch.Generator(device).manual_seed(seed) if seed != 0 else None - - try: - if img is not None: - return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, - generator, scale_factor), None - else: - return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, - scale_factor), None - except Exception as e: - return None, error_str(e) - # if img is not None: - # return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, - # generator, scale_factor), None - # else: - # return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, scale_factor), None - - -def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, scale_factor): - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=dtype, - scheduler=scheduler, - safety_checker=lambda images, clip_input: (images, False)) - else: - # pipe = pipe.to("cpu") - pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to(device) - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt=neg_prompt, - # num_images_per_prompt=n_images, - num_inference_steps=int(steps), - guidance_scale=guidance, - width=width, - height=height, - generator=generator) - result.images[0] = magnifier.magnify(result.images[0], scale_factor=scale_factor) - - # save image - result.images[0].save("imgs/result-{}.png".format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))) - return replace_nsfw_images(result) - - -def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator, scale_factor): - fprint(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=dtype, - scheduler=scheduler, - safety_checker=lambda images, clip_input: ( - images, False)) - else: - # pipe = pipe.to("cpu") - pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to(device) - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), 
Image.LANCZOS) - result = pipe( - prompt, - negative_prompt=neg_prompt, - # num_images_per_prompt=n_images, - image=img, - num_inference_steps=int(steps), - strength=strength, - guidance_scale=guidance, - # width=width, - # height=height, - generator=generator) - result.images[0] = magnifier.magnify(result.images[0], scale_factor=scale_factor) - - # save image - result.images[0].save("imgs/result-{}.png".format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))) - return replace_nsfw_images(result) - - -def replace_nsfw_images(results): - if is_colab: - return results.images[0] - if hasattr(results, "nsfw_content_detected") and results.nsfw_content_detected: - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - - -css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - if not os.path.exists('imgs'): - os.mkdir('imgs') - - gr.Markdown('# Super Resolution Anime Diffusion') - gr.Markdown( - "## Author: [yangheng95](https://github.com/yangheng95) Github:[Github](https://github.com/yangheng95/SuperResolutionAnimeDiffusion)") - gr.Markdown("### This demo is running on a CPU, so it will take at least 20 minutes. " - "If you have a GPU, you can clone from [Github](https://github.com/yangheng95/SuperResolutionAnimeDiffusion) and run it locally.") - gr.Markdown("### FYI: to generate a 512*512 image and magnify 4x, it only takes 5~8 seconds on a RTX 2080 GPU") - gr.Markdown( - "### You can duplicate this demo on HuggingFace Spaces, click [here](https://huggingface.co/spaces/yangheng/Super-Resolution-Anime-Diffusion?duplicate=true)") - - with gr.Row(): - with gr.Column(scale=55): - with gr.Group(): - gr.Markdown("Text to image") - - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", - placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", - interactive=True) - gr.HTML( - "
    Custom models have to be downloaded first, so give it some time.
    ") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2, - placeholder="Enter prompt. Style applied automatically").style(container=False) - with gr.Row(): - generate = gr.Button(value="Generate") - - with gr.Row(): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Group(): - gr.Markdown("Image to Image") - - with gr.Row(): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, - value=0.5) - - with gr.Row(): - with gr.Group(): - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=15, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - with gr.Row(): - scale_factor = gr.Slider(1, 8, label='Scale factor (to magnify image) (1, 2, 4, 8)', - value=2, - step=1) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - - gr.Markdown("### based on [Anything V3](https://huggingface.co/Linaqruf/anything-v3.0)") - - inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, scale_factor] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs, api_name="generate") - - prompt_keys = [ - 'girl', 'lovely', 'cute', 'beautiful eyes', 'cumulonimbus clouds', 'detailed fingers', - random.choice(['dress']), - random.choice(['white hair']), - random.choice(['blue eyes']), - random.choice(['flower meadow']), - random.choice(['Elif', 'Angel']) - ] - prompt.value = ','.join(prompt_keys) - ex = gr.Examples([ - [models[0].name, prompt.value, 7.5, 15], - - ], inputs=[model_name, prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -if not is_colab: - demo.queue(concurrency_count=2) -demo.launch(debug=is_colab, enable_queue=True, share=is_colab) diff --git a/spaces/YlcldKlns/bing/src/components/chat.tsx b/spaces/YlcldKlns/bing/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from 
'@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/YlcldKlns/bing/src/lib/isomorphic/browser.ts b/spaces/YlcldKlns/bing/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/Yusin/ChatGPT-Speech/pygpt.py b/spaces/Yusin/ChatGPT-Speech/pygpt.py deleted file mode 100644 index 7a12a791c61ab48b7ab0bdc9fbc6f9463bb5fe02..0000000000000000000000000000000000000000 --- a/spaces/Yusin/ChatGPT-Speech/pygpt.py +++ /dev/null @@ -1,112 +0,0 @@ -import uuid -import asyncio -import socketio -import datetime -import json -import base64 - -class PyGPT: - def __init__(self, session_token, bypass_node='https://gpt.pawan.krd'): - self.ready = False - self.socket = socketio.AsyncClient() - self.socket.on('connect', self.on_connect) - self.socket.on('disconnect', self.on_disconnect) - self.session_token = session_token - self.conversations = [] - self.auth = None - self.expires = datetime.datetime.now() - self.pause_token_checks = False - self.bypass_node = bypass_node - asyncio.create_task(self.cleanup_conversations()) - - async def connect(self): - await self.socket.connect(self.bypass_node) - - async def disconnect(self): - await self.socket.disconnect() - await self.socket.close() - - def on_connect(self): - print('Connected to server') - asyncio.create_task(self.check_tokens()) - - def on_disconnect(self): - print('Disconnected from server') - self.ready = False - - async def check_tokens(self): - while True: - if self.pause_token_checks: - await 
asyncio.sleep(0.5) - continue - self.pause_token_checks = True - now = datetime.datetime.now() - offset = datetime.timedelta(minutes=2) - if self.expires < (now - offset) or not self.auth: - await self.get_tokens() - self.pause_token_checks = False - await asyncio.sleep(0.5) - - async def cleanup_conversations(self): - while True: - await asyncio.sleep(60) - now = datetime.datetime.now() - self.conversations = [c for c in self.conversations if now - c['last_active'] < datetime.timedelta(minutes=2)] - - def add_conversation(self, id): - conversation = { - 'id': id, - 'conversation_id': None, - 'parent_id': uuid.uuid4(), - 'last_active': datetime.datetime.now() - } - self.conversations.append(conversation) - return conversation - - def get_conversation_by_id(self, id): - conversation = next((c for c in self.conversations if c['id'] == id), None) - if conversation is None: - conversation = self.add_conversation(id) - else: - conversation['last_active'] = datetime.datetime.now() - return conversation - - async def wait_for_ready(self): - while not self.ready: - await asyncio.sleep(0.025) - print('Ready!!') - - async def ask(self, prompt, id='default'): - if not self.auth or not self.validate_token(self.auth): - await self.get_tokens() - conversation = self.get_conversation_by_id(id) - data = await self.socket.call('askQuestion', { - 'prompt': prompt, - 'parentId': str(conversation['parent_id']), - 'conversationId': str(conversation['conversation_id']), - 'auth': self.auth - }) - - if 'error' in data: - print(f'Error: {data["error"]}') - conversation['parent_id'] = data['messageId'] - conversation['conversation_id'] = data['conversationId'] - return data['answer'] - - def validate_token(self, token): - if not token: - return False - parsed = json.loads(base64.b64decode(f'{token.split(".")[1]}==').decode()) - return datetime.datetime.now() <= datetime.datetime.fromtimestamp(parsed['exp']) - - async def get_tokens(self): - await asyncio.sleep(1) - data = await self.socket.call('getSession', self.session_token) - - if 'error' in data: - print(f'Error getting session: {data["error"]}') - else: - self.auth = data['auth'] - self.expires = datetime.datetime.strptime(data['expires'], '%Y-%m-%dT%H:%M:%S.%fZ') - self.session_token = data['sessionToken'] - self.ready = True \ No newline at end of file diff --git a/spaces/aadnk/whisper-webui/src/diarization/diarizationContainer.py b/spaces/aadnk/whisper-webui/src/diarization/diarizationContainer.py deleted file mode 100644 index a4ad44ace823b649b9b2f313de828e89cfdffc1f..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/src/diarization/diarizationContainer.py +++ /dev/null @@ -1,78 +0,0 @@ -from typing import List -from src.diarization.diarization import Diarization, DiarizationEntry -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache -from src.vadParallel import ParallelContext - -class DiarizationContainer: - def __init__(self, auth_token: str = None, enable_daemon_process: bool = True, auto_cleanup_timeout_seconds=60, cache: ModelCache = None): - self.auth_token = auth_token - self.enable_daemon_process = enable_daemon_process - self.auto_cleanup_timeout_seconds = auto_cleanup_timeout_seconds - self.diarization_context: ParallelContext = None - self.cache = cache - self.model = None - - def run(self, audio_file, **kwargs): - # Create parallel context if needed - if self.diarization_context is None and self.enable_daemon_process: - # Number of processes is set to 1 as we mainly use this in order to clean up GPU memory - 
self.diarization_context = ParallelContext(num_processes=1, auto_cleanup_timeout_seconds=self.auto_cleanup_timeout_seconds) - print("Created diarization context with auto cleanup timeout of %d seconds" % self.auto_cleanup_timeout_seconds) - - # Run directly - if self.diarization_context is None: - return self.execute(audio_file, **kwargs) - - # Otherwise run in a separate process - pool = self.diarization_context.get_pool() - - try: - result = pool.apply(self.execute, (audio_file,), kwargs) - return result - finally: - self.diarization_context.return_pool(pool) - - def mark_speakers(self, diarization_result: List[DiarizationEntry], whisper_result: dict): - if self.model is not None: - return self.model.mark_speakers(diarization_result, whisper_result) - - # Create a new diarization model (calling mark_speakers will not initialize pyannote.audio) - model = Diarization(self.auth_token) - return model.mark_speakers(diarization_result, whisper_result) - - def get_model(self): - # Lazy load the model - if (self.model is None): - if self.cache: - print("Loading diarization model from cache") - self.model = self.cache.get("diarization", lambda : Diarization(self.auth_token)) - else: - print("Loading diarization model") - self.model = Diarization(self.auth_token) - return self.model - - def execute(self, audio_file, **kwargs): - model = self.get_model() - - # We must use list() here to force the iterator to run, as generators are not picklable - result = list(model.run(audio_file, **kwargs)) - return result - - def cleanup(self): - if self.diarization_context is not None: - self.diarization_context.close() - - def __getstate__(self): - return { - "auth_token": self.auth_token, - "enable_daemon_process": self.enable_daemon_process, - "auto_cleanup_timeout_seconds": self.auto_cleanup_timeout_seconds - } - - def __setstate__(self, state): - self.auth_token = state["auth_token"] - self.enable_daemon_process = state["enable_daemon_process"] - self.auto_cleanup_timeout_seconds = state["auto_cleanup_timeout_seconds"] - self.diarization_context = None - self.cache = GLOBAL_MODEL_CACHE - self.model = None \ No newline at end of file diff --git a/spaces/abbbbbbbbbbbbbb/Arabic_poem_classifier/app.py b/spaces/abbbbbbbbbbbbbb/Arabic_poem_classifier/app.py deleted file mode 100644 index bbf72b782320453cd5d9fb4e7e1ebd99fc972af8..0000000000000000000000000000000000000000 --- a/spaces/abbbbbbbbbbbbbb/Arabic_poem_classifier/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr - -description = "التعرف على خاصيات البيت الشعري" -title = """هذا البرنامج يقوم بالتعرف على مختلف خاصيات البيت من الشعر. 
-يمكنكم إختيار الخاصية من بين: -- التعرف على البحر -- التعرف على الروي -التعرف على الموضوع-""" - -examples = [["سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"], ["قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"]] - - -meter = gr.Interface.load("huggingface/Yah216/Arabic_poem_meter_3", - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - examples=examples, title = "التعرف على البحر", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -rawiy = gr.Interface.load("huggingface/Yah216/Poem_Qafiyah_Detection", - title ="التعرف على الروي", - examples=examples, - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -subject = gr.Interface.load( - "huggingface/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230", - title="التعرف على الموضوع", - examples=examples, - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -demo = gr.TabbedInterface([meter, rawiy, subject], ["التعرف على البحر","التعرف على الروي","التعرف على الموضوع"]) -demo.launch() - diff --git a/spaces/abhishek/sketch-to-image/annotator/mlsd/models/mbv2_mlsd_large.py b/spaces/abhishek/sketch-to-image/annotator/mlsd/models/mbv2_mlsd_large.py deleted file mode 100644 index 7122d8ebd9279530b332a0729a9c2bd2aed70fb9..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/mlsd/models/mbv2_mlsd_large.py +++ /dev/null @@ -1,302 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -import os -import sys -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo -from torch.nn import functional as F - - -class BlockTypeA(nn.Module): - def __init__(self, in_c1, in_c2, out_c1, out_c2, upscale = True): - super(BlockTypeA, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c2, out_c2, kernel_size=1), - nn.BatchNorm2d(out_c2), - nn.ReLU(inplace=True) - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c1, out_c1, kernel_size=1), - nn.BatchNorm2d(out_c1), - nn.ReLU(inplace=True) - ) - self.upscale = upscale - - def forward(self, a, b): - b = self.conv1(b) - a = self.conv2(a) - if self.upscale: - b = F.interpolate(b, scale_factor=2.0, mode='bilinear', align_corners=True) - return torch.cat((a, b), dim=1) - - -class BlockTypeB(nn.Module): - def __init__(self, in_c, out_c): - super(BlockTypeB, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=1), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c, out_c, kernel_size=3, padding=1), - nn.BatchNorm2d(out_c), - nn.ReLU() - ) - - def forward(self, x): - x = self.conv1(x) + x - x = self.conv2(x) - return x - -class BlockTypeC(nn.Module): - def __init__(self, in_c, out_c): - super(BlockTypeC, self).__init__() - self.conv1 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=5, dilation=5), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv2 = nn.Sequential( - nn.Conv2d(in_c, in_c, kernel_size=3, padding=1), - nn.BatchNorm2d(in_c), - nn.ReLU() - ) - self.conv3 = nn.Conv2d(in_c, out_c, kernel_size=1) - - def forward(self, x): - x = 
self.conv1(x) - x = self.conv2(x) - x = self.conv3(x) - return x - -def _make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. - It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. - if new_v < 0.9 * v: - new_v += divisor - return new_v - - -class ConvBNReLU(nn.Sequential): - def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1): - self.channel_pad = out_planes - in_planes - self.stride = stride - #padding = (kernel_size - 1) // 2 - - # TFLite uses slightly different padding than PyTorch - if stride == 2: - padding = 0 - else: - padding = (kernel_size - 1) // 2 - - super(ConvBNReLU, self).__init__( - nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False), - nn.BatchNorm2d(out_planes), - nn.ReLU6(inplace=True) - ) - self.max_pool = nn.MaxPool2d(kernel_size=stride, stride=stride) - - - def forward(self, x): - # TFLite uses different padding - if self.stride == 2: - x = F.pad(x, (0, 1, 0, 1), "constant", 0) - #print(x.shape) - - for module in self: - if not isinstance(module, nn.MaxPool2d): - x = module(x) - return x - - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expand_ratio): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = int(round(inp * expand_ratio)) - self.use_res_connect = self.stride == 1 and inp == oup - - layers = [] - if expand_ratio != 1: - # pw - layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1)) - layers.extend([ - # dw - ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - ]) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -class MobileNetV2(nn.Module): - def __init__(self, pretrained=True): - """ - MobileNet V2 main class - Args: - num_classes (int): Number of classes - width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount - inverted_residual_setting: Network structure - round_nearest (int): Round the number of channels in each layer to be a multiple of this number - Set to 1 to turn off rounding - block: Module specifying inverted residual building block for mobilenet - """ - super(MobileNetV2, self).__init__() - - block = InvertedResidual - input_channel = 32 - last_channel = 1280 - width_mult = 1.0 - round_nearest = 8 - - inverted_residual_setting = [ - # t, c, n, s - [1, 16, 1, 1], - [6, 24, 2, 2], - [6, 32, 3, 2], - [6, 64, 4, 2], - [6, 96, 3, 1], - #[6, 160, 3, 2], - #[6, 320, 1, 1], - ] - - # only check the first element, assuming user knows t,c,n,s are required - if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4: - raise ValueError("inverted_residual_setting should be non-empty " - "or a 4-element list, got {}".format(inverted_residual_setting)) - - # building first layer - input_channel = _make_divisible(input_channel * width_mult, round_nearest) - self.last_channel = _make_divisible(last_channel * max(1.0, 
width_mult), round_nearest) - features = [ConvBNReLU(4, input_channel, stride=2)] - # building inverted residual blocks - for t, c, n, s in inverted_residual_setting: - output_channel = _make_divisible(c * width_mult, round_nearest) - for i in range(n): - stride = s if i == 0 else 1 - features.append(block(input_channel, output_channel, stride, expand_ratio=t)) - input_channel = output_channel - - self.features = nn.Sequential(*features) - self.fpn_selected = [1, 3, 6, 10, 13] - # weight initialization - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out') - if m.bias is not None: - nn.init.zeros_(m.bias) - elif isinstance(m, nn.BatchNorm2d): - nn.init.ones_(m.weight) - nn.init.zeros_(m.bias) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - nn.init.zeros_(m.bias) - if pretrained: - self._load_pretrained_model() - - def _forward_impl(self, x): - # This exists since TorchScript doesn't support inheritance, so the superclass method - # (this one) needs to have a name other than `forward` that can be accessed in a subclass - fpn_features = [] - for i, f in enumerate(self.features): - if i > self.fpn_selected[-1]: - break - x = f(x) - if i in self.fpn_selected: - fpn_features.append(x) - - c1, c2, c3, c4, c5 = fpn_features - return c1, c2, c3, c4, c5 - - - def forward(self, x): - return self._forward_impl(x) - - def _load_pretrained_model(self): - pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/mobilenet_v2-b0353104.pth') - model_dict = {} - state_dict = self.state_dict() - for k, v in pretrain_dict.items(): - if k in state_dict: - model_dict[k] = v - state_dict.update(model_dict) - self.load_state_dict(state_dict) - - -class MobileV2_MLSD_Large(nn.Module): - def __init__(self): - super(MobileV2_MLSD_Large, self).__init__() - - self.backbone = MobileNetV2(pretrained=False) - ## A, B - self.block15 = BlockTypeA(in_c1= 64, in_c2= 96, - out_c1= 64, out_c2=64, - upscale=False) - self.block16 = BlockTypeB(128, 64) - - ## A, B - self.block17 = BlockTypeA(in_c1 = 32, in_c2 = 64, - out_c1= 64, out_c2= 64) - self.block18 = BlockTypeB(128, 64) - - ## A, B - self.block19 = BlockTypeA(in_c1=24, in_c2=64, - out_c1=64, out_c2=64) - self.block20 = BlockTypeB(128, 64) - - ## A, B, C - self.block21 = BlockTypeA(in_c1=16, in_c2=64, - out_c1=64, out_c2=64) - self.block22 = BlockTypeB(128, 64) - - self.block23 = BlockTypeC(64, 16) - - def forward(self, x): - c1, c2, c3, c4, c5 = self.backbone(x) - - x = self.block15(c4, c5) - x = self.block16(x) - - x = self.block17(c3, x) - x = self.block18(x) - - x = self.block19(c2, x) - x = self.block20(x) - - x = self.block21(c1, x) - x = self.block22(x) - x = self.block23(x) - x = x[:, 7:, :, :] - - return x \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/parallel/collate.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/parallel/collate.py deleted file mode 100644 index ad749197df21b0d74297548be5f66a696adebf7f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/spaces/abidlabs/call-sentiment-blocks-2/README.md b/spaces/abidlabs/call-sentiment-blocks-2/README.md deleted file mode 100644 index 05fdf048eb772fd35341f6c05496ec6d7ee748c2..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/call-sentiment-blocks-2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Call Sentiment Blocks 2 -emoji: 🐠 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 2.9b23 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/abidlabs/speech-translation/app.py b/spaces/abidlabs/speech-translation/app.py deleted file mode 100644 index 959bae81ec3d493eea79b12f685e90d67cb62be2..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/speech-translation/app.py +++ /dev/null @@ -1,91 +0,0 @@ 
-import gradio as gr -import librosa -from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel -import torch - -model_name = "facebook/wav2vec2-xls-r-2b-22-to-16" -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) -model = SpeechEncoderDecoderModel.from_pretrained(model_name).to(device) - -if torch.cuda.is_available(): - model.half() - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != 16000: - data = librosa.resample(data, sr, 16000) - print(data.shape) - input_values = feature_extractor(data, return_tensors="pt").input_values.to(device) - - if torch.cuda.is_available(): - input_values = input_values.to(torch.float16) - return input_values - -def transcribe(file_mic, target_language): - - target_code = target_language.split("(")[-1].split(")")[0] - forced_bos_token_id = MAPPING[target_code] - - input_values = process_audio_file(file_mic) - - sequences = model.generate(input_values, forced_bos_token_id=forced_bos_token_id) - - transcription = tokenizer.batch_decode(sequences, skip_special_tokens=True) - return warn_output + transcription[0] - -target_language = [ - "English (en)", - "German (de)", - "Turkish (tr)", - "Persian (fa)", - "Swedish (sv)", - "Mongolian (mn)", - "Chinese (zh)", - "Welsh (cy)", - "Catalan (ca)", - "Slovenian (sl)", - "Estonian (et)", - "Indonesian (id)", - "Arabic (ar)", - "Tamil (ta)", - "Latvian (lv)", - "Japanese (ja)", -] - -MAPPING = { - "en": 250004, - "de": 250003, - "tr": 250023, - "fa": 250029, - "sv": 250042, - "mn": 250037, - "zh": 250025, - "cy": 250007, - "ca": 250005, - "sl": 250052, - "et": 250006, - "id": 250032, - "ar": 250001, - "ta": 250044, - "lv": 250017, - "ja": 250012, -} - -iface = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type='filepath'), - gr.inputs.Dropdown(target_language), - ], - outputs="text", - layout="horizontal", - theme="default", - description="A simple interface to translate from 22 input spoken languages to 16 written languages built by Meta/Facebook AI.", - enable_queue=True, - allow_flagging=False, -) - -iface.launch() diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/egl.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/egl.py deleted file mode 100644 index 2bd4808c080912201829e0e80db48e947d32d145..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/egl.py +++ /dev/null @@ -1,568 +0,0 @@ -"""Wrapper for /usr/include/EGL/egl - -Generated with: -wrap.py -o lib_egl.py /usr/include/EGL/egl.h - -Do not modify this file. 
-""" -from ctypes import * - -import pyglet.lib - -_lib = pyglet.lib.load_library('EGL') - - -__egl_h_ = 1 # /usr/include/EGL/egl.h:2 -EGL_EGL_PROTOTYPES = 1 # /usr/include/EGL/egl.h:42 -EGL_VERSION_1_0 = 1 # /usr/include/EGL/egl.h:57 -EGLBoolean = c_uint # /usr/include/EGL/egl.h:58 -EGLDisplay = POINTER(None) # /usr/include/EGL/egl.h:59 -EGLConfig = POINTER(None) # /usr/include/EGL/egl.h:62 -EGLSurface = POINTER(None) # /usr/include/EGL/egl.h:63 -EGLContext = POINTER(None) # /usr/include/EGL/egl.h:64 -__eglMustCastToProperFunctionPointerType = CFUNCTYPE(None) # /usr/include/EGL/egl.h:65 -EGL_ALPHA_SIZE = 12321 # /usr/include/EGL/egl.h:66 -EGL_BAD_ACCESS = 12290 # /usr/include/EGL/egl.h:67 -EGL_BAD_ALLOC = 12291 # /usr/include/EGL/egl.h:68 -EGL_BAD_ATTRIBUTE = 12292 # /usr/include/EGL/egl.h:69 -EGL_BAD_CONFIG = 12293 # /usr/include/EGL/egl.h:70 -EGL_BAD_CONTEXT = 12294 # /usr/include/EGL/egl.h:71 -EGL_BAD_CURRENT_SURFACE = 12295 # /usr/include/EGL/egl.h:72 -EGL_BAD_DISPLAY = 12296 # /usr/include/EGL/egl.h:73 -EGL_BAD_MATCH = 12297 # /usr/include/EGL/egl.h:74 -EGL_BAD_NATIVE_PIXMAP = 12298 # /usr/include/EGL/egl.h:75 -EGL_BAD_NATIVE_WINDOW = 12299 # /usr/include/EGL/egl.h:76 -EGL_BAD_PARAMETER = 12300 # /usr/include/EGL/egl.h:77 -EGL_BAD_SURFACE = 12301 # /usr/include/EGL/egl.h:78 -EGL_BLUE_SIZE = 12322 # /usr/include/EGL/egl.h:79 -EGL_BUFFER_SIZE = 12320 # /usr/include/EGL/egl.h:80 -EGL_CONFIG_CAVEAT = 12327 # /usr/include/EGL/egl.h:81 -EGL_CONFIG_ID = 12328 # /usr/include/EGL/egl.h:82 -EGL_CORE_NATIVE_ENGINE = 12379 # /usr/include/EGL/egl.h:83 -EGL_DEPTH_SIZE = 12325 # /usr/include/EGL/egl.h:84 -EGL_DRAW = 12377 # /usr/include/EGL/egl.h:86 -EGL_EXTENSIONS = 12373 # /usr/include/EGL/egl.h:87 -EGL_FALSE = 0 # /usr/include/EGL/egl.h:88 -EGL_GREEN_SIZE = 12323 # /usr/include/EGL/egl.h:89 -EGL_HEIGHT = 12374 # /usr/include/EGL/egl.h:90 -EGL_LARGEST_PBUFFER = 12376 # /usr/include/EGL/egl.h:91 -EGL_LEVEL = 12329 # /usr/include/EGL/egl.h:92 -EGL_MAX_PBUFFER_HEIGHT = 12330 # /usr/include/EGL/egl.h:93 -EGL_MAX_PBUFFER_PIXELS = 12331 # /usr/include/EGL/egl.h:94 -EGL_MAX_PBUFFER_WIDTH = 12332 # /usr/include/EGL/egl.h:95 -EGL_NATIVE_RENDERABLE = 12333 # /usr/include/EGL/egl.h:96 -EGL_NATIVE_VISUAL_ID = 12334 # /usr/include/EGL/egl.h:97 -EGL_NATIVE_VISUAL_TYPE = 12335 # /usr/include/EGL/egl.h:98 -EGL_NONE = 12344 # /usr/include/EGL/egl.h:99 -EGL_NON_CONFORMANT_CONFIG = 12369 # /usr/include/EGL/egl.h:100 -EGL_NOT_INITIALIZED = 12289 # /usr/include/EGL/egl.h:101 -EGL_PBUFFER_BIT = 1 # /usr/include/EGL/egl.h:105 -EGL_PIXMAP_BIT = 2 # /usr/include/EGL/egl.h:106 -EGL_READ = 12378 # /usr/include/EGL/egl.h:107 -EGL_RED_SIZE = 12324 # /usr/include/EGL/egl.h:108 -EGL_SAMPLES = 12337 # /usr/include/EGL/egl.h:109 -EGL_SAMPLE_BUFFERS = 12338 # /usr/include/EGL/egl.h:110 -EGL_SLOW_CONFIG = 12368 # /usr/include/EGL/egl.h:111 -EGL_STENCIL_SIZE = 12326 # /usr/include/EGL/egl.h:112 -EGL_SUCCESS = 12288 # /usr/include/EGL/egl.h:113 -EGL_SURFACE_TYPE = 12339 # /usr/include/EGL/egl.h:114 -EGL_TRANSPARENT_BLUE_VALUE = 12341 # /usr/include/EGL/egl.h:115 -EGL_TRANSPARENT_GREEN_VALUE = 12342 # /usr/include/EGL/egl.h:116 -EGL_TRANSPARENT_RED_VALUE = 12343 # /usr/include/EGL/egl.h:117 -EGL_TRANSPARENT_RGB = 12370 # /usr/include/EGL/egl.h:118 -EGL_TRANSPARENT_TYPE = 12340 # /usr/include/EGL/egl.h:119 -EGL_TRUE = 1 # /usr/include/EGL/egl.h:120 -EGL_VENDOR = 12371 # /usr/include/EGL/egl.h:121 -EGL_VERSION = 12372 # /usr/include/EGL/egl.h:122 -EGL_WIDTH = 12375 # /usr/include/EGL/egl.h:123 -EGL_WINDOW_BIT = 4 # 
/usr/include/EGL/egl.h:124 -khronos_int32_t = c_int32 # /usr/include/KHR/khrplatform.h:150 -EGLint = khronos_int32_t # /usr/include/EGL/eglplatform.h:166 -PFNEGLCHOOSECONFIGPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, POINTER(EGLint), POINTER(EGLConfig), EGLint, POINTER(EGLint)) # /usr/include/EGL/egl.h:125 -XID = c_ulong # /usr/include/X11/X.h:66 -Pixmap = XID # /usr/include/X11/X.h:102 -EGLNativePixmapType = Pixmap # /usr/include/EGL/eglplatform.h:132 -PFNEGLCOPYBUFFERSPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLNativePixmapType) # /usr/include/EGL/egl.h:126 -PFNEGLCREATECONTEXTPROC = CFUNCTYPE(EGLContext, EGLDisplay, EGLConfig, EGLContext, POINTER(EGLint)) # /usr/include/EGL/egl.h:127 -PFNEGLCREATEPBUFFERSURFACEPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLConfig, POINTER(EGLint)) # /usr/include/EGL/egl.h:128 -PFNEGLCREATEPIXMAPSURFACEPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLConfig, EGLNativePixmapType, POINTER(EGLint)) # /usr/include/EGL/egl.h:129 -Window = XID # /usr/include/X11/X.h:96 -EGLNativeWindowType = Window # /usr/include/EGL/eglplatform.h:133 -PFNEGLCREATEWINDOWSURFACEPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLConfig, EGLNativeWindowType, POINTER(EGLint)) # /usr/include/EGL/egl.h:130 -PFNEGLDESTROYCONTEXTPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLContext) # /usr/include/EGL/egl.h:131 -PFNEGLDESTROYSURFACEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface) # /usr/include/EGL/egl.h:132 -PFNEGLGETCONFIGATTRIBPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLConfig, EGLint, POINTER(EGLint)) # /usr/include/EGL/egl.h:133 -PFNEGLGETCONFIGSPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, POINTER(EGLConfig), EGLint, POINTER(EGLint)) # /usr/include/EGL/egl.h:134 -PFNEGLGETCURRENTDISPLAYPROC = CFUNCTYPE(EGLDisplay) # /usr/include/EGL/egl.h:135 -PFNEGLGETCURRENTSURFACEPROC = CFUNCTYPE(EGLSurface, EGLint) # /usr/include/EGL/egl.h:136 - -class struct__XDisplay(Structure): - __slots__ = [ - ] -struct__XDisplay._fields_ = [ - ('_opaque_struct', c_int) -] - -Display = struct__XDisplay # /usr/include/X11/Xlib.h:487 -EGLNativeDisplayType = POINTER(Display) # /usr/include/EGL/eglplatform.h:131 -PFNEGLGETDISPLAYPROC = CFUNCTYPE(EGLDisplay, EGLNativeDisplayType) # /usr/include/EGL/egl.h:137 -PFNEGLGETERRORPROC = CFUNCTYPE(EGLint) # /usr/include/EGL/egl.h:138 -PFNEGLGETPROCADDRESSPROC = CFUNCTYPE(__eglMustCastToProperFunctionPointerType, c_char_p) # /usr/include/EGL/egl.h:139 -PFNEGLINITIALIZEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, POINTER(EGLint), POINTER(EGLint)) # /usr/include/EGL/egl.h:140 -PFNEGLMAKECURRENTPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLSurface, EGLContext) # /usr/include/EGL/egl.h:141 -PFNEGLQUERYCONTEXTPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLContext, EGLint, POINTER(EGLint)) # /usr/include/EGL/egl.h:142 -PFNEGLQUERYSTRINGPROC = CFUNCTYPE(c_char_p, EGLDisplay, EGLint) # /usr/include/EGL/egl.h:143 -PFNEGLQUERYSURFACEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLint, POINTER(EGLint)) # /usr/include/EGL/egl.h:144 -PFNEGLSWAPBUFFERSPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface) # /usr/include/EGL/egl.h:145 -PFNEGLTERMINATEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay) # /usr/include/EGL/egl.h:146 -PFNEGLWAITGLPROC = CFUNCTYPE(EGLBoolean) # /usr/include/EGL/egl.h:147 -PFNEGLWAITNATIVEPROC = CFUNCTYPE(EGLBoolean, EGLint) # /usr/include/EGL/egl.h:148 -# /usr/include/EGL/egl.h:150 -eglChooseConfig = _lib.eglChooseConfig -eglChooseConfig.restype = EGLBoolean -eglChooseConfig.argtypes = [EGLDisplay, POINTER(EGLint), POINTER(EGLConfig), EGLint, POINTER(EGLint)] - -# 
/usr/include/EGL/egl.h:151 -eglCopyBuffers = _lib.eglCopyBuffers -eglCopyBuffers.restype = EGLBoolean -eglCopyBuffers.argtypes = [EGLDisplay, EGLSurface, EGLNativePixmapType] - -# /usr/include/EGL/egl.h:152 -eglCreateContext = _lib.eglCreateContext -eglCreateContext.restype = EGLContext -eglCreateContext.argtypes = [EGLDisplay, EGLConfig, EGLContext, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:153 -eglCreatePbufferSurface = _lib.eglCreatePbufferSurface -eglCreatePbufferSurface.restype = EGLSurface -eglCreatePbufferSurface.argtypes = [EGLDisplay, EGLConfig, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:154 -eglCreatePixmapSurface = _lib.eglCreatePixmapSurface -eglCreatePixmapSurface.restype = EGLSurface -eglCreatePixmapSurface.argtypes = [EGLDisplay, EGLConfig, EGLNativePixmapType, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:155 -eglCreateWindowSurface = _lib.eglCreateWindowSurface -eglCreateWindowSurface.restype = EGLSurface -eglCreateWindowSurface.argtypes = [EGLDisplay, EGLConfig, EGLNativeWindowType, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:156 -eglDestroyContext = _lib.eglDestroyContext -eglDestroyContext.restype = EGLBoolean -eglDestroyContext.argtypes = [EGLDisplay, EGLContext] - -# /usr/include/EGL/egl.h:157 -eglDestroySurface = _lib.eglDestroySurface -eglDestroySurface.restype = EGLBoolean -eglDestroySurface.argtypes = [EGLDisplay, EGLSurface] - -# /usr/include/EGL/egl.h:158 -eglGetConfigAttrib = _lib.eglGetConfigAttrib -eglGetConfigAttrib.restype = EGLBoolean -eglGetConfigAttrib.argtypes = [EGLDisplay, EGLConfig, EGLint, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:159 -eglGetConfigs = _lib.eglGetConfigs -eglGetConfigs.restype = EGLBoolean -eglGetConfigs.argtypes = [EGLDisplay, POINTER(EGLConfig), EGLint, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:160 -eglGetCurrentDisplay = _lib.eglGetCurrentDisplay -eglGetCurrentDisplay.restype = EGLDisplay -eglGetCurrentDisplay.argtypes = [] - -# /usr/include/EGL/egl.h:161 -eglGetCurrentSurface = _lib.eglGetCurrentSurface -eglGetCurrentSurface.restype = EGLSurface -eglGetCurrentSurface.argtypes = [EGLint] - -# /usr/include/EGL/egl.h:162 -eglGetDisplay = _lib.eglGetDisplay -eglGetDisplay.restype = EGLDisplay -eglGetDisplay.argtypes = [EGLNativeDisplayType] - -# /usr/include/EGL/egl.h:163 -eglGetError = _lib.eglGetError -eglGetError.restype = EGLint -eglGetError.argtypes = [] - -# /usr/include/EGL/egl.h:164 -eglGetProcAddress = _lib.eglGetProcAddress -eglGetProcAddress.restype = __eglMustCastToProperFunctionPointerType -eglGetProcAddress.argtypes = [c_char_p] - -# /usr/include/EGL/egl.h:165 -eglInitialize = _lib.eglInitialize -eglInitialize.restype = EGLBoolean -eglInitialize.argtypes = [EGLDisplay, POINTER(EGLint), POINTER(EGLint)] - -# /usr/include/EGL/egl.h:166 -eglMakeCurrent = _lib.eglMakeCurrent -eglMakeCurrent.restype = EGLBoolean -eglMakeCurrent.argtypes = [EGLDisplay, EGLSurface, EGLSurface, EGLContext] - -# /usr/include/EGL/egl.h:167 -eglQueryContext = _lib.eglQueryContext -eglQueryContext.restype = EGLBoolean -eglQueryContext.argtypes = [EGLDisplay, EGLContext, EGLint, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:168 -eglQueryString = _lib.eglQueryString -eglQueryString.restype = c_char_p -eglQueryString.argtypes = [EGLDisplay, EGLint] - -# /usr/include/EGL/egl.h:169 -eglQuerySurface = _lib.eglQuerySurface -eglQuerySurface.restype = EGLBoolean -eglQuerySurface.argtypes = [EGLDisplay, EGLSurface, EGLint, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:170 -eglSwapBuffers = _lib.eglSwapBuffers -eglSwapBuffers.restype = EGLBoolean 
-eglSwapBuffers.argtypes = [EGLDisplay, EGLSurface] - -# /usr/include/EGL/egl.h:171 -eglTerminate = _lib.eglTerminate -eglTerminate.restype = EGLBoolean -eglTerminate.argtypes = [EGLDisplay] - -# /usr/include/EGL/egl.h:172 -eglWaitGL = _lib.eglWaitGL -eglWaitGL.restype = EGLBoolean -eglWaitGL.argtypes = [] - -# /usr/include/EGL/egl.h:173 -eglWaitNative = _lib.eglWaitNative -eglWaitNative.restype = EGLBoolean -eglWaitNative.argtypes = [EGLint] - -EGL_VERSION_1_1 = 1 # /usr/include/EGL/egl.h:178 -EGL_BACK_BUFFER = 12420 # /usr/include/EGL/egl.h:179 -EGL_BIND_TO_TEXTURE_RGB = 12345 # /usr/include/EGL/egl.h:180 -EGL_BIND_TO_TEXTURE_RGBA = 12346 # /usr/include/EGL/egl.h:181 -EGL_CONTEXT_LOST = 12302 # /usr/include/EGL/egl.h:182 -EGL_MIN_SWAP_INTERVAL = 12347 # /usr/include/EGL/egl.h:183 -EGL_MAX_SWAP_INTERVAL = 12348 # /usr/include/EGL/egl.h:184 -EGL_MIPMAP_TEXTURE = 12418 # /usr/include/EGL/egl.h:185 -EGL_MIPMAP_LEVEL = 12419 # /usr/include/EGL/egl.h:186 -EGL_NO_TEXTURE = 12380 # /usr/include/EGL/egl.h:187 -EGL_TEXTURE_2D = 12383 # /usr/include/EGL/egl.h:188 -EGL_TEXTURE_FORMAT = 12416 # /usr/include/EGL/egl.h:189 -EGL_TEXTURE_RGB = 12381 # /usr/include/EGL/egl.h:190 -EGL_TEXTURE_RGBA = 12382 # /usr/include/EGL/egl.h:191 -EGL_TEXTURE_TARGET = 12417 # /usr/include/EGL/egl.h:192 -PFNEGLBINDTEXIMAGEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLint) # /usr/include/EGL/egl.h:193 -PFNEGLRELEASETEXIMAGEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLint) # /usr/include/EGL/egl.h:194 -PFNEGLSURFACEATTRIBPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSurface, EGLint, EGLint) # /usr/include/EGL/egl.h:195 -PFNEGLSWAPINTERVALPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLint) # /usr/include/EGL/egl.h:196 -# /usr/include/EGL/egl.h:198 -eglBindTexImage = _lib.eglBindTexImage -eglBindTexImage.restype = EGLBoolean -eglBindTexImage.argtypes = [EGLDisplay, EGLSurface, EGLint] - -# /usr/include/EGL/egl.h:199 -eglReleaseTexImage = _lib.eglReleaseTexImage -eglReleaseTexImage.restype = EGLBoolean -eglReleaseTexImage.argtypes = [EGLDisplay, EGLSurface, EGLint] - -# /usr/include/EGL/egl.h:200 -eglSurfaceAttrib = _lib.eglSurfaceAttrib -eglSurfaceAttrib.restype = EGLBoolean -eglSurfaceAttrib.argtypes = [EGLDisplay, EGLSurface, EGLint, EGLint] - -# /usr/include/EGL/egl.h:201 -eglSwapInterval = _lib.eglSwapInterval -eglSwapInterval.restype = EGLBoolean -eglSwapInterval.argtypes = [EGLDisplay, EGLint] - -EGL_VERSION_1_2 = 1 # /usr/include/EGL/egl.h:206 -EGLenum = c_uint # /usr/include/EGL/egl.h:207 -EGLClientBuffer = POINTER(None) # /usr/include/EGL/egl.h:208 -EGL_ALPHA_FORMAT = 12424 # /usr/include/EGL/egl.h:209 -EGL_ALPHA_FORMAT_NONPRE = 12427 # /usr/include/EGL/egl.h:210 -EGL_ALPHA_FORMAT_PRE = 12428 # /usr/include/EGL/egl.h:211 -EGL_ALPHA_MASK_SIZE = 12350 # /usr/include/EGL/egl.h:212 -EGL_BUFFER_PRESERVED = 12436 # /usr/include/EGL/egl.h:213 -EGL_BUFFER_DESTROYED = 12437 # /usr/include/EGL/egl.h:214 -EGL_CLIENT_APIS = 12429 # /usr/include/EGL/egl.h:215 -EGL_COLORSPACE = 12423 # /usr/include/EGL/egl.h:216 -EGL_COLORSPACE_sRGB = 12425 # /usr/include/EGL/egl.h:217 -EGL_COLORSPACE_LINEAR = 12426 # /usr/include/EGL/egl.h:218 -EGL_COLOR_BUFFER_TYPE = 12351 # /usr/include/EGL/egl.h:219 -EGL_CONTEXT_CLIENT_TYPE = 12439 # /usr/include/EGL/egl.h:220 -EGL_DISPLAY_SCALING = 10000 # /usr/include/EGL/egl.h:221 -EGL_HORIZONTAL_RESOLUTION = 12432 # /usr/include/EGL/egl.h:222 -EGL_LUMINANCE_BUFFER = 12431 # /usr/include/EGL/egl.h:223 -EGL_LUMINANCE_SIZE = 12349 # /usr/include/EGL/egl.h:224 -EGL_OPENGL_ES_BIT = 1 # 
/usr/include/EGL/egl.h:225 -EGL_OPENVG_BIT = 2 # /usr/include/EGL/egl.h:226 -EGL_OPENGL_ES_API = 12448 # /usr/include/EGL/egl.h:227 -EGL_OPENVG_API = 12449 # /usr/include/EGL/egl.h:228 -EGL_OPENVG_IMAGE = 12438 # /usr/include/EGL/egl.h:229 -EGL_PIXEL_ASPECT_RATIO = 12434 # /usr/include/EGL/egl.h:230 -EGL_RENDERABLE_TYPE = 12352 # /usr/include/EGL/egl.h:231 -EGL_RENDER_BUFFER = 12422 # /usr/include/EGL/egl.h:232 -EGL_RGB_BUFFER = 12430 # /usr/include/EGL/egl.h:233 -EGL_SINGLE_BUFFER = 12421 # /usr/include/EGL/egl.h:234 -EGL_SWAP_BEHAVIOR = 12435 # /usr/include/EGL/egl.h:235 -EGL_VERTICAL_RESOLUTION = 12433 # /usr/include/EGL/egl.h:237 -PFNEGLBINDAPIPROC = CFUNCTYPE(EGLBoolean, EGLenum) # /usr/include/EGL/egl.h:238 -PFNEGLQUERYAPIPROC = CFUNCTYPE(EGLenum) # /usr/include/EGL/egl.h:239 -PFNEGLCREATEPBUFFERFROMCLIENTBUFFERPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLenum, EGLClientBuffer, EGLConfig, POINTER(EGLint)) # /usr/include/EGL/egl.h:240 -PFNEGLRELEASETHREADPROC = CFUNCTYPE(EGLBoolean) # /usr/include/EGL/egl.h:241 -PFNEGLWAITCLIENTPROC = CFUNCTYPE(EGLBoolean) # /usr/include/EGL/egl.h:242 -# /usr/include/EGL/egl.h:244 -eglBindAPI = _lib.eglBindAPI -eglBindAPI.restype = EGLBoolean -eglBindAPI.argtypes = [EGLenum] - -# /usr/include/EGL/egl.h:245 -eglQueryAPI = _lib.eglQueryAPI -eglQueryAPI.restype = EGLenum -eglQueryAPI.argtypes = [] - -# /usr/include/EGL/egl.h:246 -eglCreatePbufferFromClientBuffer = _lib.eglCreatePbufferFromClientBuffer -eglCreatePbufferFromClientBuffer.restype = EGLSurface -eglCreatePbufferFromClientBuffer.argtypes = [EGLDisplay, EGLenum, EGLClientBuffer, EGLConfig, POINTER(EGLint)] - -# /usr/include/EGL/egl.h:247 -eglReleaseThread = _lib.eglReleaseThread -eglReleaseThread.restype = EGLBoolean -eglReleaseThread.argtypes = [] - -# /usr/include/EGL/egl.h:248 -eglWaitClient = _lib.eglWaitClient -eglWaitClient.restype = EGLBoolean -eglWaitClient.argtypes = [] - -EGL_VERSION_1_3 = 1 # /usr/include/EGL/egl.h:253 -EGL_CONFORMANT = 12354 # /usr/include/EGL/egl.h:254 -EGL_CONTEXT_CLIENT_VERSION = 12440 # /usr/include/EGL/egl.h:255 -EGL_MATCH_NATIVE_PIXMAP = 12353 # /usr/include/EGL/egl.h:256 -EGL_OPENGL_ES2_BIT = 4 # /usr/include/EGL/egl.h:257 -EGL_VG_ALPHA_FORMAT = 12424 # /usr/include/EGL/egl.h:258 -EGL_VG_ALPHA_FORMAT_NONPRE = 12427 # /usr/include/EGL/egl.h:259 -EGL_VG_ALPHA_FORMAT_PRE = 12428 # /usr/include/EGL/egl.h:260 -EGL_VG_ALPHA_FORMAT_PRE_BIT = 64 # /usr/include/EGL/egl.h:261 -EGL_VG_COLORSPACE = 12423 # /usr/include/EGL/egl.h:262 -EGL_VG_COLORSPACE_sRGB = 12425 # /usr/include/EGL/egl.h:263 -EGL_VG_COLORSPACE_LINEAR = 12426 # /usr/include/EGL/egl.h:264 -EGL_VG_COLORSPACE_LINEAR_BIT = 32 # /usr/include/EGL/egl.h:265 -EGL_VERSION_1_4 = 1 # /usr/include/EGL/egl.h:269 -EGL_MULTISAMPLE_RESOLVE_BOX_BIT = 512 # /usr/include/EGL/egl.h:271 -EGL_MULTISAMPLE_RESOLVE = 12441 # /usr/include/EGL/egl.h:272 -EGL_MULTISAMPLE_RESOLVE_DEFAULT = 12442 # /usr/include/EGL/egl.h:273 -EGL_MULTISAMPLE_RESOLVE_BOX = 12443 # /usr/include/EGL/egl.h:274 -EGL_OPENGL_API = 12450 # /usr/include/EGL/egl.h:275 -EGL_OPENGL_BIT = 8 # /usr/include/EGL/egl.h:276 -EGL_SWAP_BEHAVIOR_PRESERVED_BIT = 1024 # /usr/include/EGL/egl.h:277 -PFNEGLGETCURRENTCONTEXTPROC = CFUNCTYPE(EGLContext) # /usr/include/EGL/egl.h:278 -# /usr/include/EGL/egl.h:280 -eglGetCurrentContext = _lib.eglGetCurrentContext -eglGetCurrentContext.restype = EGLContext -eglGetCurrentContext.argtypes = [] - -EGL_VERSION_1_5 = 1 # /usr/include/EGL/egl.h:285 -EGLSync = POINTER(None) # /usr/include/EGL/egl.h:286 -intptr_t = c_long # 
/usr/include/stdint.h:87 -EGLAttrib = intptr_t # /usr/include/EGL/egl.h:287 -khronos_uint64_t = c_uint64 # /usr/include/KHR/khrplatform.h:153 -khronos_utime_nanoseconds_t = khronos_uint64_t # /usr/include/KHR/khrplatform.h:267 -EGLTime = khronos_utime_nanoseconds_t # /usr/include/EGL/egl.h:288 -EGLImage = POINTER(None) # /usr/include/EGL/egl.h:289 -EGL_CONTEXT_MAJOR_VERSION = 12440 # /usr/include/EGL/egl.h:290 -EGL_CONTEXT_MINOR_VERSION = 12539 # /usr/include/EGL/egl.h:291 -EGL_CONTEXT_OPENGL_PROFILE_MASK = 12541 # /usr/include/EGL/egl.h:292 -EGL_CONTEXT_OPENGL_RESET_NOTIFICATION_STRATEGY = 12733 # /usr/include/EGL/egl.h:293 -EGL_NO_RESET_NOTIFICATION = 12734 # /usr/include/EGL/egl.h:294 -EGL_LOSE_CONTEXT_ON_RESET = 12735 # /usr/include/EGL/egl.h:295 -EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT = 1 # /usr/include/EGL/egl.h:296 -EGL_CONTEXT_OPENGL_COMPATIBILITY_PROFILE_BIT = 2 # /usr/include/EGL/egl.h:297 -EGL_CONTEXT_OPENGL_DEBUG = 12720 # /usr/include/EGL/egl.h:298 -EGL_CONTEXT_OPENGL_FORWARD_COMPATIBLE = 12721 # /usr/include/EGL/egl.h:299 -EGL_CONTEXT_OPENGL_ROBUST_ACCESS = 12722 # /usr/include/EGL/egl.h:300 -EGL_OPENGL_ES3_BIT = 64 # /usr/include/EGL/egl.h:301 -EGL_CL_EVENT_HANDLE = 12444 # /usr/include/EGL/egl.h:302 -EGL_SYNC_CL_EVENT = 12542 # /usr/include/EGL/egl.h:303 -EGL_SYNC_CL_EVENT_COMPLETE = 12543 # /usr/include/EGL/egl.h:304 -EGL_SYNC_PRIOR_COMMANDS_COMPLETE = 12528 # /usr/include/EGL/egl.h:305 -EGL_SYNC_TYPE = 12535 # /usr/include/EGL/egl.h:306 -EGL_SYNC_STATUS = 12529 # /usr/include/EGL/egl.h:307 -EGL_SYNC_CONDITION = 12536 # /usr/include/EGL/egl.h:308 -EGL_SIGNALED = 12530 # /usr/include/EGL/egl.h:309 -EGL_UNSIGNALED = 12531 # /usr/include/EGL/egl.h:310 -EGL_SYNC_FLUSH_COMMANDS_BIT = 1 # /usr/include/EGL/egl.h:311 -EGL_FOREVER = 18446744073709551615 # /usr/include/EGL/egl.h:312 -EGL_TIMEOUT_EXPIRED = 12533 # /usr/include/EGL/egl.h:313 -EGL_CONDITION_SATISFIED = 12534 # /usr/include/EGL/egl.h:314 -EGL_SYNC_FENCE = 12537 # /usr/include/EGL/egl.h:316 -EGL_GL_COLORSPACE = 12445 # /usr/include/EGL/egl.h:317 -EGL_GL_COLORSPACE_SRGB = 12425 # /usr/include/EGL/egl.h:318 -EGL_GL_COLORSPACE_LINEAR = 12426 # /usr/include/EGL/egl.h:319 -EGL_GL_RENDERBUFFER = 12473 # /usr/include/EGL/egl.h:320 -EGL_GL_TEXTURE_2D = 12465 # /usr/include/EGL/egl.h:321 -EGL_GL_TEXTURE_LEVEL = 12476 # /usr/include/EGL/egl.h:322 -EGL_GL_TEXTURE_3D = 12466 # /usr/include/EGL/egl.h:323 -EGL_GL_TEXTURE_ZOFFSET = 12477 # /usr/include/EGL/egl.h:324 -EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_X = 12467 # /usr/include/EGL/egl.h:325 -EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_X = 12468 # /usr/include/EGL/egl.h:326 -EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_Y = 12469 # /usr/include/EGL/egl.h:327 -EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_Y = 12470 # /usr/include/EGL/egl.h:328 -EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_Z = 12471 # /usr/include/EGL/egl.h:329 -EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_Z = 12472 # /usr/include/EGL/egl.h:330 -EGL_IMAGE_PRESERVED = 12498 # /usr/include/EGL/egl.h:331 -PFNEGLCREATESYNCPROC = CFUNCTYPE(EGLSync, EGLDisplay, EGLenum, POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:333 -PFNEGLDESTROYSYNCPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSync) # /usr/include/EGL/egl.h:334 -PFNEGLCLIENTWAITSYNCPROC = CFUNCTYPE(EGLint, EGLDisplay, EGLSync, EGLint, EGLTime) # /usr/include/EGL/egl.h:335 -PFNEGLGETSYNCATTRIBPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSync, EGLint, POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:336 -PFNEGLCREATEIMAGEPROC = CFUNCTYPE(EGLImage, EGLDisplay, EGLContext, EGLenum, EGLClientBuffer, POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:337 
-PFNEGLDESTROYIMAGEPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLImage) # /usr/include/EGL/egl.h:338 -PFNEGLGETPLATFORMDISPLAYPROC = CFUNCTYPE(EGLDisplay, EGLenum, POINTER(None), POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:339 -PFNEGLCREATEPLATFORMWINDOWSURFACEPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLConfig, POINTER(None), POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:340 -PFNEGLCREATEPLATFORMPIXMAPSURFACEPROC = CFUNCTYPE(EGLSurface, EGLDisplay, EGLConfig, POINTER(None), POINTER(EGLAttrib)) # /usr/include/EGL/egl.h:341 -PFNEGLWAITSYNCPROC = CFUNCTYPE(EGLBoolean, EGLDisplay, EGLSync, EGLint) # /usr/include/EGL/egl.h:342 -# /usr/include/EGL/egl.h:344 -eglCreateSync = _lib.eglCreateSync -eglCreateSync.restype = EGLSync -eglCreateSync.argtypes = [EGLDisplay, EGLenum, POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:345 -eglDestroySync = _lib.eglDestroySync -eglDestroySync.restype = EGLBoolean -eglDestroySync.argtypes = [EGLDisplay, EGLSync] - -# /usr/include/EGL/egl.h:346 -eglClientWaitSync = _lib.eglClientWaitSync -eglClientWaitSync.restype = EGLint -eglClientWaitSync.argtypes = [EGLDisplay, EGLSync, EGLint, EGLTime] - -# /usr/include/EGL/egl.h:347 -eglGetSyncAttrib = _lib.eglGetSyncAttrib -eglGetSyncAttrib.restype = EGLBoolean -eglGetSyncAttrib.argtypes = [EGLDisplay, EGLSync, EGLint, POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:348 -eglCreateImage = _lib.eglCreateImage -eglCreateImage.restype = EGLImage -eglCreateImage.argtypes = [EGLDisplay, EGLContext, EGLenum, EGLClientBuffer, POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:349 -eglDestroyImage = _lib.eglDestroyImage -eglDestroyImage.restype = EGLBoolean -eglDestroyImage.argtypes = [EGLDisplay, EGLImage] - -# /usr/include/EGL/egl.h:350 -eglGetPlatformDisplay = _lib.eglGetPlatformDisplay -eglGetPlatformDisplay.restype = EGLDisplay -eglGetPlatformDisplay.argtypes = [EGLenum, POINTER(None), POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:351 -eglCreatePlatformWindowSurface = _lib.eglCreatePlatformWindowSurface -eglCreatePlatformWindowSurface.restype = EGLSurface -eglCreatePlatformWindowSurface.argtypes = [EGLDisplay, EGLConfig, POINTER(None), POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:352 -eglCreatePlatformPixmapSurface = _lib.eglCreatePlatformPixmapSurface -eglCreatePlatformPixmapSurface.restype = EGLSurface -eglCreatePlatformPixmapSurface.argtypes = [EGLDisplay, EGLConfig, POINTER(None), POINTER(EGLAttrib)] - -# /usr/include/EGL/egl.h:353 -eglWaitSync = _lib.eglWaitSync -eglWaitSync.restype = EGLBoolean -eglWaitSync.argtypes = [EGLDisplay, EGLSync, EGLint] - - -__all__ = ['__egl_h_', 'EGL_EGL_PROTOTYPES', 'EGL_VERSION_1_0', 'EGLBoolean', 'EGLint', -'EGLDisplay', 'EGLConfig', 'EGLSurface', 'EGLContext', -'__eglMustCastToProperFunctionPointerType', 'EGL_ALPHA_SIZE', -'EGL_BAD_ACCESS', 'EGL_BAD_ALLOC', 'EGL_BAD_ATTRIBUTE', 'EGL_BAD_CONFIG', -'EGL_BAD_CONTEXT', 'EGL_BAD_CURRENT_SURFACE', 'EGL_BAD_DISPLAY', -'EGL_BAD_MATCH', 'EGL_BAD_NATIVE_PIXMAP', 'EGL_BAD_NATIVE_WINDOW', -'EGL_BAD_PARAMETER', 'EGL_BAD_SURFACE', 'EGL_BLUE_SIZE', 'EGL_BUFFER_SIZE', -'EGL_CONFIG_CAVEAT', 'EGL_CONFIG_ID', 'EGL_CORE_NATIVE_ENGINE', -'EGL_DEPTH_SIZE', 'EGL_DRAW', 'EGL_EXTENSIONS', 'EGL_FALSE', 'EGL_GREEN_SIZE', -'EGL_HEIGHT', 'EGL_LARGEST_PBUFFER', 'EGL_LEVEL', 'EGL_MAX_PBUFFER_HEIGHT', -'EGL_MAX_PBUFFER_PIXELS', 'EGL_MAX_PBUFFER_WIDTH', 'EGL_NATIVE_RENDERABLE', -'EGL_NATIVE_VISUAL_ID', 'EGL_NATIVE_VISUAL_TYPE', 'EGL_NONE', -'EGL_NON_CONFORMANT_CONFIG', 'EGL_NOT_INITIALIZED', 'EGL_PBUFFER_BIT', -'EGL_PIXMAP_BIT', 'EGL_READ', 'EGL_RED_SIZE', 'EGL_SAMPLES', 
-'EGL_SAMPLE_BUFFERS', 'EGL_SLOW_CONFIG', 'EGL_STENCIL_SIZE', 'EGL_SUCCESS', -'EGL_SURFACE_TYPE', 'EGL_TRANSPARENT_BLUE_VALUE', -'EGL_TRANSPARENT_GREEN_VALUE', 'EGL_TRANSPARENT_RED_VALUE', -'EGL_TRANSPARENT_RGB', 'EGL_TRANSPARENT_TYPE', 'EGL_TRUE', 'EGL_VENDOR', -'EGL_VERSION', 'EGL_WIDTH', 'EGL_WINDOW_BIT', 'PFNEGLCHOOSECONFIGPROC', -'PFNEGLCOPYBUFFERSPROC', 'PFNEGLCREATECONTEXTPROC', -'PFNEGLCREATEPBUFFERSURFACEPROC', 'PFNEGLCREATEPIXMAPSURFACEPROC', -'PFNEGLCREATEWINDOWSURFACEPROC', 'PFNEGLDESTROYCONTEXTPROC', -'PFNEGLDESTROYSURFACEPROC', 'PFNEGLGETCONFIGATTRIBPROC', -'PFNEGLGETCONFIGSPROC', 'PFNEGLGETCURRENTDISPLAYPROC', -'PFNEGLGETCURRENTSURFACEPROC', 'PFNEGLGETDISPLAYPROC', 'PFNEGLGETERRORPROC', -'PFNEGLGETPROCADDRESSPROC', 'PFNEGLINITIALIZEPROC', 'PFNEGLMAKECURRENTPROC', -'PFNEGLQUERYCONTEXTPROC', 'PFNEGLQUERYSTRINGPROC', 'PFNEGLQUERYSURFACEPROC', -'PFNEGLSWAPBUFFERSPROC', 'PFNEGLTERMINATEPROC', 'PFNEGLWAITGLPROC', -'PFNEGLWAITNATIVEPROC', 'eglChooseConfig', 'eglCopyBuffers', -'eglCreateContext', 'eglCreatePbufferSurface', 'eglCreatePixmapSurface', -'eglCreateWindowSurface', 'eglDestroyContext', 'eglDestroySurface', -'eglGetConfigAttrib', 'eglGetConfigs', 'eglGetCurrentDisplay', -'eglGetCurrentSurface', 'eglGetDisplay', 'eglGetError', 'eglGetProcAddress', -'eglInitialize', 'eglMakeCurrent', 'eglQueryContext', 'eglQueryString', -'eglQuerySurface', 'eglSwapBuffers', 'eglTerminate', 'eglWaitGL', -'eglWaitNative', 'EGL_VERSION_1_1', 'EGL_BACK_BUFFER', -'EGL_BIND_TO_TEXTURE_RGB', 'EGL_BIND_TO_TEXTURE_RGBA', 'EGL_CONTEXT_LOST', -'EGL_MIN_SWAP_INTERVAL', 'EGL_MAX_SWAP_INTERVAL', 'EGL_MIPMAP_TEXTURE', -'EGL_MIPMAP_LEVEL', 'EGL_NO_TEXTURE', 'EGL_TEXTURE_2D', 'EGL_TEXTURE_FORMAT', -'EGL_TEXTURE_RGB', 'EGL_TEXTURE_RGBA', 'EGL_TEXTURE_TARGET', -'PFNEGLBINDTEXIMAGEPROC', 'PFNEGLRELEASETEXIMAGEPROC', -'PFNEGLSURFACEATTRIBPROC', 'PFNEGLSWAPINTERVALPROC', 'eglBindTexImage', -'eglReleaseTexImage', 'eglSurfaceAttrib', 'eglSwapInterval', -'EGL_VERSION_1_2', 'EGLenum', 'EGLClientBuffer', 'EGL_ALPHA_FORMAT', -'EGL_ALPHA_FORMAT_NONPRE', 'EGL_ALPHA_FORMAT_PRE', 'EGL_ALPHA_MASK_SIZE', -'EGL_BUFFER_PRESERVED', 'EGL_BUFFER_DESTROYED', 'EGL_CLIENT_APIS', -'EGL_COLORSPACE', 'EGL_COLORSPACE_sRGB', 'EGL_COLORSPACE_LINEAR', -'EGL_COLOR_BUFFER_TYPE', 'EGL_CONTEXT_CLIENT_TYPE', 'EGL_DISPLAY_SCALING', -'EGL_HORIZONTAL_RESOLUTION', 'EGL_LUMINANCE_BUFFER', 'EGL_LUMINANCE_SIZE', -'EGL_OPENGL_ES_BIT', 'EGL_OPENVG_BIT', 'EGL_OPENGL_ES_API', 'EGL_OPENVG_API', -'EGL_OPENVG_IMAGE', 'EGL_PIXEL_ASPECT_RATIO', 'EGL_RENDERABLE_TYPE', -'EGL_RENDER_BUFFER', 'EGL_RGB_BUFFER', 'EGL_SINGLE_BUFFER', -'EGL_SWAP_BEHAVIOR', 'EGL_VERTICAL_RESOLUTION', 'PFNEGLBINDAPIPROC', -'PFNEGLQUERYAPIPROC', 'PFNEGLCREATEPBUFFERFROMCLIENTBUFFERPROC', -'PFNEGLRELEASETHREADPROC', 'PFNEGLWAITCLIENTPROC', 'eglBindAPI', -'eglQueryAPI', 'eglCreatePbufferFromClientBuffer', 'eglReleaseThread', -'eglWaitClient', 'EGL_VERSION_1_3', 'EGL_CONFORMANT', -'EGL_CONTEXT_CLIENT_VERSION', 'EGL_MATCH_NATIVE_PIXMAP', 'EGL_OPENGL_ES2_BIT', -'EGL_VG_ALPHA_FORMAT', 'EGL_VG_ALPHA_FORMAT_NONPRE', -'EGL_VG_ALPHA_FORMAT_PRE', 'EGL_VG_ALPHA_FORMAT_PRE_BIT', 'EGL_VG_COLORSPACE', -'EGL_VG_COLORSPACE_sRGB', 'EGL_VG_COLORSPACE_LINEAR', -'EGL_VG_COLORSPACE_LINEAR_BIT', 'EGL_VERSION_1_4', -'EGL_MULTISAMPLE_RESOLVE_BOX_BIT', 'EGL_MULTISAMPLE_RESOLVE', -'EGL_MULTISAMPLE_RESOLVE_DEFAULT', 'EGL_MULTISAMPLE_RESOLVE_BOX', -'EGL_OPENGL_API', 'EGL_OPENGL_BIT', 'EGL_SWAP_BEHAVIOR_PRESERVED_BIT', -'PFNEGLGETCURRENTCONTEXTPROC', 'eglGetCurrentContext', 'EGL_VERSION_1_5', -'EGLSync', 'EGLAttrib', 
'EGLTime', 'EGLImage', 'EGL_CONTEXT_MAJOR_VERSION', -'EGL_CONTEXT_MINOR_VERSION', 'EGL_CONTEXT_OPENGL_PROFILE_MASK', -'EGL_CONTEXT_OPENGL_RESET_NOTIFICATION_STRATEGY', 'EGL_NO_RESET_NOTIFICATION', -'EGL_LOSE_CONTEXT_ON_RESET', 'EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT', -'EGL_CONTEXT_OPENGL_COMPATIBILITY_PROFILE_BIT', 'EGL_CONTEXT_OPENGL_DEBUG', -'EGL_CONTEXT_OPENGL_FORWARD_COMPATIBLE', 'EGL_CONTEXT_OPENGL_ROBUST_ACCESS', -'EGL_OPENGL_ES3_BIT', 'EGL_CL_EVENT_HANDLE', 'EGL_SYNC_CL_EVENT', -'EGL_SYNC_CL_EVENT_COMPLETE', 'EGL_SYNC_PRIOR_COMMANDS_COMPLETE', -'EGL_SYNC_TYPE', 'EGL_SYNC_STATUS', 'EGL_SYNC_CONDITION', 'EGL_SIGNALED', -'EGL_UNSIGNALED', 'EGL_SYNC_FLUSH_COMMANDS_BIT', 'EGL_FOREVER', -'EGL_TIMEOUT_EXPIRED', 'EGL_CONDITION_SATISFIED', 'EGL_SYNC_FENCE', -'EGL_GL_COLORSPACE', 'EGL_GL_COLORSPACE_SRGB', 'EGL_GL_COLORSPACE_LINEAR', -'EGL_GL_RENDERBUFFER', 'EGL_GL_TEXTURE_2D', 'EGL_GL_TEXTURE_LEVEL', -'EGL_GL_TEXTURE_3D', 'EGL_GL_TEXTURE_ZOFFSET', -'EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_X', 'EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_X', -'EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_Y', 'EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_Y', -'EGL_GL_TEXTURE_CUBE_MAP_POSITIVE_Z', 'EGL_GL_TEXTURE_CUBE_MAP_NEGATIVE_Z', -'EGL_IMAGE_PRESERVED', 'PFNEGLCREATESYNCPROC', 'PFNEGLDESTROYSYNCPROC', -'PFNEGLCLIENTWAITSYNCPROC', 'PFNEGLGETSYNCATTRIBPROC', -'PFNEGLCREATEIMAGEPROC', 'PFNEGLDESTROYIMAGEPROC', -'PFNEGLGETPLATFORMDISPLAYPROC', 'PFNEGLCREATEPLATFORMWINDOWSURFACEPROC', -'PFNEGLCREATEPLATFORMPIXMAPSURFACEPROC', 'PFNEGLWAITSYNCPROC', -'eglCreateSync', 'eglDestroySync', 'eglClientWaitSync', 'eglGetSyncAttrib', -'eglCreateImage', 'eglDestroyImage', 'eglGetPlatformDisplay', -'eglCreatePlatformWindowSurface', 'eglCreatePlatformPixmapSurface', -'eglWaitSync'] diff --git a/spaces/akhaliq/Car_Keypoints/app.py b/spaces/akhaliq/Car_Keypoints/app.py deleted file mode 100644 index 62e55eb70509f042691fffcdf2440d3d6adb4fae..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Car_Keypoints/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import gradio as gr -import os - -def inference(image): - os.system("""python -m openpifpaf.predict """+ image.name+""" \ - --checkpoint=shufflenetv2k16-apollo-24 -o out.jpg \ - --instance-threshold 0.05 --seed-threshold 0.05 \ - --line-width 4 --font-size 0""") - return "out.jpg" -title = "OpenPifPaf" -description = "Gradio demo for OpenPifPaf: Composite Fields for Semantic Keypoint Detection and Spatio-Temporal Association. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "
OpenPifPaf: Composite Fields for Semantic Keypoint Detection and Spatio-Temporal Association | Github Repo
    " -examples=[['cars.png']] -gr.Interface( - inference, - gr.inputs.Image(type="file", label="Input"), - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/download.py b/spaces/akhaliq/SummerTime/download.py deleted file mode 100644 index 3f59569e354853f0961315d42da1ab3226a96884..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/download.py +++ /dev/null @@ -1,3 +0,0 @@ -import nltk - -nltk.download("stopwords") diff --git a/spaces/ali-ghamdan/image-colors-corrector/evaluation/calc_deltaE2000.py b/spaces/ali-ghamdan/image-colors-corrector/evaluation/calc_deltaE2000.py deleted file mode 100644 index 9c954bad475ad89b8e3233a46a4056342a0e4431..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/image-colors-corrector/evaluation/calc_deltaE2000.py +++ /dev/null @@ -1,96 +0,0 @@ -## Calculate mean Delta 2000 between source and target images. -# -# Copyright (c) 2018-present, Mahmoud Afifi -# York University, Canada -# mafifi@eecs.yorku.ca | m.3afifi@gmail.com -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# All rights reserved. -# -# Please cite the following work if this program is used: -# Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S. Brown, -# "When color constancy goes wrong: Correcting improperly white-balanced -# images", CVPR 2019. -# -########################################################################## - - -import cv2 -import numpy as np -from skimage import color - - -def calc_deltaE2000(source, target, color_chart_area): - source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB) - target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB) - source = color.rgb2lab(source) - target = color.rgb2lab(target) - source = np.reshape(source, [-1, 3]).astype(np.float32) - target = np.reshape(target, [-1, 3]).astype(np.float32) - deltaE00 = deltaE2000(source, target) - return sum(deltaE00) / (np.shape(deltaE00)[0] - color_chart_area) - - -def deltaE2000(Labstd, Labsample): - kl = 1 - kc = 1 - kh = 1 - Lstd = np.transpose(Labstd[:, 0]) - astd = np.transpose(Labstd[:, 1]) - bstd = np.transpose(Labstd[:, 2]) - Cabstd = np.sqrt(np.power(astd, 2) + np.power(bstd, 2)) - Lsample = np.transpose(Labsample[:, 0]) - asample = np.transpose(Labsample[:, 1]) - bsample = np.transpose(Labsample[:, 2]) - Cabsample = np.sqrt(np.power(asample, 2) + np.power(bsample, 2)) - Cabarithmean = (Cabstd + Cabsample) / 2 - G = 0.5 * (1 - np.sqrt((np.power(Cabarithmean, 7)) / (np.power( - Cabarithmean, 7) + np.power(25, 7)))) - apstd = (1 + G) * astd - apsample = (1 + G) * asample - Cpsample = np.sqrt(np.power(apsample, 2) + np.power(bsample, 2)) - Cpstd = np.sqrt(np.power(apstd, 2) + np.power(bstd, 2)) - Cpprod = (Cpsample * Cpstd) - zcidx = np.argwhere(Cpprod == 0) - hpstd = np.arctan2(bstd, apstd) - hpstd[np.argwhere((np.abs(apstd) + np.abs(bstd)) == 0)] = 0 - hpsample = np.arctan2(bsample, apsample) - hpsample = hpsample + 2 * np.pi * (hpsample < 0) - hpsample[np.argwhere((np.abs(apsample) + np.abs(bsample)) == 0)] = 0 - dL = (Lsample - Lstd) - dC = (Cpsample - Cpstd) - dhp = (hpsample - hpstd) - dhp = dhp - 2 * np.pi * (dhp > np.pi) - dhp = dhp + 2 * np.pi * (dhp < (-np.pi)) - dhp[zcidx] = 0 - dH = 2 * np.sqrt(Cpprod) * np.sin(dhp / 2) - Lp = (Lsample + Lstd) / 2 - Cp = (Cpstd + Cpsample) / 2 - hp = (hpstd + hpsample) / 2 
- hp = hp - (np.abs(hpstd - hpsample) > np.pi) * np.pi - hp = hp + (hp < 0) * 2 * np.pi - hp[zcidx] = hpsample[zcidx] + hpstd[zcidx] - Lpm502 = np.power((Lp - 50), 2) - Sl = 1 + 0.015 * Lpm502 / np.sqrt(20 + Lpm502) - Sc = 1 + 0.045 * Cp - T = 1 - 0.17 * np.cos(hp - np.pi / 6) + 0.24 * np.cos(2 * hp) + \ - 0.32 * np.cos(3 * hp + np.pi / 30) \ - - 0.20 * np.cos(4 * hp - 63 * np.pi / 180) - Sh = 1 + 0.015 * Cp * T - delthetarad = (30 * np.pi / 180) * np.exp( - - np.power((180 / np.pi * hp - 275) / 25, 2)) - Rc = 2 * np.sqrt((np.power(Cp, 7)) / (np.power(Cp, 7) + np.power(25, 7))) - RT = - np.sin(2 * delthetarad) * Rc - klSl = kl * Sl - kcSc = kc * Sc - khSh = kh * Sh - de00 = np.sqrt(np.power((dL / klSl), 2) + np.power((dC / kcSc), 2) + - np.power((dH / khSh), 2) + RT * (dC / kcSc) * (dH / khSh)) - return de00 - -################################################# -# References: -# [1] The CIEDE2000 Color-Difference Formula: Implementation Notes, -# Supplementary Test Data, and Mathematical Observations,", G. Sharma, -# W. Wu, E. N. Dalal, Color Research and Application, 2005. diff --git a/spaces/allknowingroger/Image-Models-Test65/README.md b/spaces/allknowingroger/Image-Models-Test65/README.md deleted file mode 100644 index 537033afd70f76e7022120774e984998b76c0406..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test65/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Models -emoji: 👀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test64 ---- - - \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/superbooga/script.py b/spaces/antonovmaxim/text-generation-webui-space/extensions/superbooga/script.py deleted file mode 100644 index a1d66add9945a9cc300345c0e3cb3f0360c04362..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/extensions/superbooga/script.py +++ /dev/null @@ -1,249 +0,0 @@ -import logging -import re -import textwrap - -import gradio as gr -from bs4 import BeautifulSoup -from modules import chat, shared - -from .chromadb import add_chunks_to_collector, make_collector -from .download_urls import download_urls - - -params = { - 'chunk_count': 5, - 'chunk_length': 700, - 'chunk_separator': '', - 'strong_cleanup': False, - 'threads': 4, -} - -collector = make_collector() -chat_collector = make_collector() -chunk_count = 5 - - -def feed_data_into_collector(corpus, chunk_len, chunk_sep): - global collector - - # Defining variables - chunk_len = int(chunk_len) - chunk_sep = chunk_sep.replace(r'\n', '\n') - cumulative = '' - - # Breaking the data into chunks and adding those to the db - cumulative += "Breaking the input dataset...\n\n" - yield cumulative - if chunk_sep: - data_chunks = corpus.split(chunk_sep) - data_chunks = [[data_chunk[i:i + chunk_len] for i in range(0, len(data_chunk), chunk_len)] for data_chunk in data_chunks] - data_chunks = [x for y in data_chunks for x in y] - else: - data_chunks = [corpus[i:i + chunk_len] for i in range(0, len(corpus), chunk_len)] - cumulative += f"{len(data_chunks)} chunks have been found.\n\nAdding the chunks to the database...\n\n" - yield cumulative - add_chunks_to_collector(data_chunks, collector) - cumulative += "Done." 
- yield cumulative - - -def feed_file_into_collector(file, chunk_len, chunk_sep): - yield 'Reading the input dataset...\n\n' - text = file.decode('utf-8') - for i in feed_data_into_collector(text, chunk_len, chunk_sep): - yield i - - -def feed_url_into_collector(urls, chunk_len, chunk_sep, strong_cleanup, threads): - all_text = '' - cumulative = '' - - urls = urls.strip().split('\n') - cumulative += f'Loading {len(urls)} URLs with {threads} threads...\n\n' - yield cumulative - for update, contents in download_urls(urls, threads=threads): - yield cumulative + update - - cumulative += 'Processing the HTML sources...' - yield cumulative - for content in contents: - soup = BeautifulSoup(content, features="html.parser") - for script in soup(["script", "style"]): - script.extract() - - strings = soup.stripped_strings - if strong_cleanup: - strings = [s for s in strings if re.search("[A-Za-z] ", s)] - - text = '\n'.join([s.strip() for s in strings]) - all_text += text - - for i in feed_data_into_collector(all_text, chunk_len, chunk_sep): - yield i - - -def apply_settings(_chunk_count): - global chunk_count - chunk_count = int(_chunk_count) - settings_to_display = { - 'chunk_count': chunk_count, - } - - yield f"The following settings are now active: {str(settings_to_display)}" - - -def custom_generate_chat_prompt(user_input, state, **kwargs): - global chat_collector - - if state['mode'] == 'instruct': - results = collector.get_sorted(user_input, n_results=chunk_count) - additional_context = '\nYour reply should be based on the context below:\n\n' + '\n'.join(results) - user_input += additional_context - else: - - def make_single_exchange(id_): - output = '' - output += f"{state['name1']}: {shared.history['internal'][id_][0]}\n" - output += f"{state['name2']}: {shared.history['internal'][id_][1]}\n" - return output - - if len(shared.history['internal']) > chunk_count and user_input != '': - chunks = [] - hist_size = len(shared.history['internal']) - for i in range(hist_size-1): - chunks.append(make_single_exchange(i)) - - add_chunks_to_collector(chunks, chat_collector) - query = '\n'.join(shared.history['internal'][-1] + [user_input]) - try: - best_ids = chat_collector.get_ids_sorted(query, n_results=chunk_count) - additional_context = '\n' - for id_ in best_ids: - if shared.history['internal'][id_][0] != '<|BEGIN-VISIBLE-CHAT|>': - additional_context += make_single_exchange(id_) - - logging.warning(f'Adding the following new context:\n{additional_context}') - state['context'] = state['context'].strip() + '\n' + additional_context - state['history'] = [shared.history['internal'][i] for i in range(hist_size) if i not in best_ids] - except RuntimeError: - logging.error("Couldn't query the database, moving on...") - - return chat.generate_chat_prompt(user_input, state, **kwargs) - - -def remove_special_tokens(string): - pattern = r'(<\|begin-user-input\|>|<\|end-user-input\|>|<\|injection-point\|>)' - return re.sub(pattern, '', string) - - -def input_modifier(string): - if shared.is_chat(): - return string - - # Find the user input - pattern = re.compile(r"<\|begin-user-input\|>(.*?)<\|end-user-input\|>", re.DOTALL) - match = re.search(pattern, string) - if match: - user_input = match.group(1).strip() - - # Get the most similar chunks - results = collector.get_sorted(user_input, n_results=chunk_count) - - # Make the injection - string = string.replace('<|injection-point|>', '\n'.join(results)) - - return remove_special_tokens(string) - - -def ui(): - with gr.Accordion("Click for more information...", 
open=False): - gr.Markdown(textwrap.dedent(""" - - ## About - - This extension takes a dataset as input, breaks it into chunks, and adds the result to a local/offline Chroma database. - - The database is then queried during inference time to get the excerpts that are closest to your input. The idea is to create an arbitrarily large pseudo context. - - The core methodology was developed and contributed by kaiokendev, who is working on improvements to the method in this repository: https://github.com/kaiokendev/superbig - - ## Data input - - Start by entering some data in the interface below and then clicking on "Load data". - - Each time you load some new data, the old chunks are discarded. - - ## Chat mode - - #### Instruct - - On each turn, the chunks will be compared to your current input and the most relevant matches will be appended to the input in the following format: - - ``` - Consider the excerpts below as additional context: - ... - ``` - - The injection doesn't make it into the chat history. It is only used in the current generation. - - #### Regular chat - - The chunks from the external data sources are ignored, and the chroma database is built based on the chat history instead. The most relevant past exchanges relative to the present input are added to the context string. This way, the extension acts as a long term memory. - - ## Notebook/default modes - - Your question must be manually specified between `<|begin-user-input|>` and `<|end-user-input|>` tags, and the injection point must be specified with `<|injection-point|>`. - - The special tokens mentioned above (`<|begin-user-input|>`, `<|end-user-input|>`, and `<|injection-point|>`) are removed in the background before the text generation begins. - - Here is an example in Vicuna 1.1 format: - - ``` - A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. - - USER: - - <|begin-user-input|> - What datasets are mentioned in the text below? - <|end-user-input|> - - <|injection-point|> - - ASSISTANT: - ``` - - ⚠️ For best results, make sure to remove the spaces and new line characters after `ASSISTANT:`. - - *This extension is currently experimental and under development.* - - """)) - - with gr.Row(): - with gr.Column(min_width=600): - with gr.Tab("Text input"): - data_input = gr.Textbox(lines=20, label='Input data') - update_data = gr.Button('Load data') - - with gr.Tab("URL input"): - url_input = gr.Textbox(lines=10, label='Input URLs', info='Enter one or more URLs separated by newline characters.') - strong_cleanup = gr.Checkbox(value=params['strong_cleanup'], label='Strong cleanup', info='Only keeps html elements that look like long-form text.') - threads = gr.Number(value=params['threads'], label='Threads', info='The number of threads to use while downloading the URLs.', precision=0) - update_url = gr.Button('Load data') - - with gr.Tab("File input"): - file_input = gr.File(label='Input file', type='binary') - update_file = gr.Button('Load data') - - with gr.Tab("Generation settings"): - chunk_count = gr.Number(value=params['chunk_count'], label='Chunk count', info='The number of closest-matching chunks to include in the prompt.') - update_settings = gr.Button('Apply changes') - - chunk_len = gr.Number(value=params['chunk_length'], label='Chunk length', info='In characters, not tokens. 
This value is used when you click on "Load data".') - chunk_sep = gr.Textbox(value=params['chunk_separator'], label='Chunk separator', info='Used to manually split chunks. Manually split chunks longer than chunk length are split again. This value is used when you click on "Load data".') - with gr.Column(): - last_updated = gr.Markdown() - - update_data.click(feed_data_into_collector, [data_input, chunk_len, chunk_sep], last_updated, show_progress=False) - update_url.click(feed_url_into_collector, [url_input, chunk_len, chunk_sep, strong_cleanup, threads], last_updated, show_progress=False) - update_file.click(feed_file_into_collector, [file_input, chunk_len, chunk_sep], last_updated, show_progress=False) - update_settings.click(apply_settings, [chunk_count], last_updated, show_progress=False) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/utils/generic_utils.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/utils/generic_utils.py deleted file mode 100644 index 1da029611b5c9bd59b05d61189674832d50ed634..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/utils/generic_utils.py +++ /dev/null @@ -1,182 +0,0 @@ -import datetime -import glob -import os -import random -import re - -import numpy as np -from scipy import signal - -from TTS.encoder.models.lstm import LSTMSpeakerEncoder -from TTS.encoder.models.resnet import ResNetSpeakerEncoder -from TTS.utils.io import save_fsspec - - -class AugmentWAV(object): - def __init__(self, ap, augmentation_config): - self.ap = ap - self.use_additive_noise = False - - if "additive" in augmentation_config.keys(): - self.additive_noise_config = augmentation_config["additive"] - additive_path = self.additive_noise_config["sounds_path"] - if additive_path: - self.use_additive_noise = True - # get noise types - self.additive_noise_types = [] - for key in self.additive_noise_config.keys(): - if isinstance(self.additive_noise_config[key], dict): - self.additive_noise_types.append(key) - - additive_files = glob.glob(os.path.join(additive_path, "**/*.wav"), recursive=True) - - self.noise_list = {} - - for wav_file in additive_files: - noise_dir = wav_file.replace(additive_path, "").split(os.sep)[0] - # ignore not listed directories - if noise_dir not in self.additive_noise_types: - continue - if not noise_dir in self.noise_list: - self.noise_list[noise_dir] = [] - self.noise_list[noise_dir].append(wav_file) - - print( - f" | > Using Additive Noise Augmentation: with {len(additive_files)} audios instances from {self.additive_noise_types}" - ) - - self.use_rir = False - - if "rir" in augmentation_config.keys(): - self.rir_config = augmentation_config["rir"] - if self.rir_config["rir_path"]: - self.rir_files = glob.glob(os.path.join(self.rir_config["rir_path"], "**/*.wav"), recursive=True) - self.use_rir = True - - print(f" | > Using RIR Noise Augmentation: with {len(self.rir_files)} audios instances") - - self.create_augmentation_global_list() - - def create_augmentation_global_list(self): - if self.use_additive_noise: - self.global_noise_list = self.additive_noise_types - else: - self.global_noise_list = [] - if self.use_rir: - self.global_noise_list.append("RIR_AUG") - - def additive_noise(self, noise_type, audio): - clean_db = 10 * np.log10(np.mean(audio**2) + 1e-4) - - noise_list = random.sample( - self.noise_list[noise_type], - random.randint( - self.additive_noise_config[noise_type]["min_num_noises"], - self.additive_noise_config[noise_type]["max_num_noises"], - ), - ) - - audio_len = 
audio.shape[0] - noises_wav = None - for noise in noise_list: - noiseaudio = self.ap.load_wav(noise, sr=self.ap.sample_rate)[:audio_len] - - if noiseaudio.shape[0] < audio_len: - continue - - noise_snr = random.uniform( - self.additive_noise_config[noise_type]["min_snr_in_db"], - self.additive_noise_config[noise_type]["max_num_noises"], - ) - noise_db = 10 * np.log10(np.mean(noiseaudio**2) + 1e-4) - noise_wav = np.sqrt(10 ** ((clean_db - noise_db - noise_snr) / 10)) * noiseaudio - - if noises_wav is None: - noises_wav = noise_wav - else: - noises_wav += noise_wav - - # if all possible files is less than audio, choose other files - if noises_wav is None: - return self.additive_noise(noise_type, audio) - - return audio + noises_wav - - def reverberate(self, audio): - audio_len = audio.shape[0] - - rir_file = random.choice(self.rir_files) - rir = self.ap.load_wav(rir_file, sr=self.ap.sample_rate) - rir = rir / np.sqrt(np.sum(rir**2)) - return signal.convolve(audio, rir, mode=self.rir_config["conv_mode"])[:audio_len] - - def apply_one(self, audio): - noise_type = random.choice(self.global_noise_list) - if noise_type == "RIR_AUG": - return self.reverberate(audio) - - return self.additive_noise(noise_type, audio) - - -def to_camel(text): - text = text.capitalize() - return re.sub(r"(?!^)_([a-zA-Z])", lambda m: m.group(1).upper(), text) - - -def setup_encoder_model(config: "Coqpit"): - if config.model_params["model_name"].lower() == "lstm": - model = LSTMSpeakerEncoder( - config.model_params["input_dim"], - config.model_params["proj_dim"], - config.model_params["lstm_dim"], - config.model_params["num_lstm_layers"], - use_torch_spec=config.model_params.get("use_torch_spec", False), - audio_config=config.audio, - ) - elif config.model_params["model_name"].lower() == "resnet": - model = ResNetSpeakerEncoder( - input_dim=config.model_params["input_dim"], - proj_dim=config.model_params["proj_dim"], - log_input=config.model_params.get("log_input", False), - use_torch_spec=config.model_params.get("use_torch_spec", False), - audio_config=config.audio, - ) - return model - - -def save_checkpoint(model, optimizer, criterion, model_loss, out_path, current_step, epoch): - checkpoint_path = "checkpoint_{}.pth".format(current_step) - checkpoint_path = os.path.join(out_path, checkpoint_path) - print(" | | > Checkpoint saving : {}".format(checkpoint_path)) - - new_state_dict = model.state_dict() - state = { - "model": new_state_dict, - "optimizer": optimizer.state_dict() if optimizer is not None else None, - "criterion": criterion.state_dict(), - "step": current_step, - "epoch": epoch, - "loss": model_loss, - "date": datetime.date.today().strftime("%B %d, %Y"), - } - save_fsspec(state, checkpoint_path) - - -def save_best_model(model, optimizer, criterion, model_loss, best_loss, out_path, current_step, epoch): - if model_loss < best_loss: - new_state_dict = model.state_dict() - state = { - "model": new_state_dict, - "optimizer": optimizer.state_dict(), - "criterion": criterion.state_dict(), - "step": current_step, - "epoch": epoch, - "loss": model_loss, - "date": datetime.date.today().strftime("%B %d, %Y"), - } - best_loss = model_loss - bestmodel_path = "best_model.pth" - bestmodel_path = os.path.join(out_path, bestmodel_path) - print("\n > BEST MODEL ({0:.5f}) : {1:}".format(model_loss, bestmodel_path)) - save_fsspec(state, bestmodel_path) - return best_loss diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/latent_encoder.py 
b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/latent_encoder.py deleted file mode 100644 index f9d62a36f1529ddd1e9e6fdd92afc5c9f224f827..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/latent_encoder.py +++ /dev/null @@ -1,141 +0,0 @@ -# ported from: Originally ported from: https://github.com/neonbjb/tortoise-tts - -import math - -import torch -from torch import nn -from torch.nn import functional as F - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - - -def conv_nd(dims, *args, **kwargs): - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def normalization(channels): - groups = 32 - if channels <= 16: - groups = 8 - elif channels <= 64: - groups = 16 - while channels % groups != 0: - groups = int(groups / 2) - assert groups > 2 - return GroupNorm32(groups, channels) - - -def zero_module(module): - for p in module.parameters(): - p.detach().zero_() - return module - - -class QKVAttention(nn.Module): - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv, mask=None, qk_bias=0): - """ - Apply QKV attention. - - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = torch.einsum("bct,bcs->bts", q * scale, k * scale) # More stable with f16 than dividing afterwards - weight = weight + qk_bias - if mask is not None: - mask = mask.repeat(self.n_heads, 1, 1) - weight[mask.logical_not()] = -torch.inf - weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype) - a = torch.einsum("bts,bcs->bct", weight, v) - - return a.reshape(bs, -1, length) - - -class AttentionBlock(nn.Module): - """An attention block that allows spatial positions to attend to each other.""" - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - out_channels=None, - do_activation=False, - ): - super().__init__() - self.channels = channels - out_channels = channels if out_channels is None else out_channels - self.do_activation = do_activation - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.norm = normalization(channels) - self.qkv = conv_nd(1, channels, out_channels * 3, 1) - self.attention = QKVAttention(self.num_heads) - - self.x_proj = nn.Identity() if out_channels == channels else conv_nd(1, channels, out_channels, 1) - self.proj_out = zero_module(conv_nd(1, out_channels, out_channels, 1)) - - def forward(self, x, mask=None, qk_bias=0): - b, c, *spatial = x.shape - if mask is not None: - if len(mask.shape) == 2: - mask = mask.unsqueeze(0).repeat(x.shape[0], 1, 1) - if mask.shape[1] != x.shape[-1]: - mask = mask[:, : x.shape[-1], : x.shape[-1]] - - x = x.reshape(b, c, -1) - x = self.norm(x) - if self.do_activation: - x = F.silu(x, inplace=True) - qkv = self.qkv(x) - h = self.attention(qkv, mask=mask, qk_bias=qk_bias) - h = self.proj_out(h) 
- xp = self.x_proj(x) - return (xp + h).reshape(b, xp.shape[1], *spatial) - - -class ConditioningEncoder(nn.Module): - def __init__( - self, - spec_dim, - embedding_dim, - attn_blocks=6, - num_attn_heads=4, - ): - super().__init__() - attn = [] - self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1) - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - - def forward(self, x): - """ - x: (b, 80, s) - """ - h = self.init(x) - h = self.attn(h) - return h diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/distribute.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/distribute.py deleted file mode 100644 index a51ef7661ece97c87c165ad1aba4c9d9700379dc..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/distribute.py +++ /dev/null @@ -1,20 +0,0 @@ -# edited from https://github.com/fastai/imagenet-fast/blob/master/imagenet_nv/distributed.py -import torch -import torch.distributed as dist - - -def reduce_tensor(tensor, num_gpus): - rt = tensor.clone() - dist.all_reduce(rt, op=dist.reduce_op.SUM) - rt /= num_gpus - return rt - - -def init_distributed(rank, num_gpus, group_name, dist_backend, dist_url): - assert torch.cuda.is_available(), "Distributed mode requires CUDA." - - # Set cuda device so everything is done on the right GPU. - torch.cuda.set_device(rank % torch.cuda.device_count()) - - # Initialize distributed communication - dist.init_process_group(dist_backend, init_method=dist_url, world_size=num_gpus, rank=rank, group_name=group_name) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/README.md b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/README.md deleted file mode 100644 index d5224fb2894606a2a8027e01e224be190776ecfe..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# Adaptive Span - -Adaptive Span is a novel self-attention mechanism that can learn its optimal -attention span. This allows us to extend significantly the maximum context size -used in Transformer, while maintaining control over their memory footprint -and computational time. It uses the Truncated BPTT technique for training, -as in [transformerXL](https://github.com/pytorch/fairseq/blob/main/examples/truncated_bptt/README.md). - -Adaptive Span was introduced by paper: -[Adaptive Attention Span in Transformers](https://arxiv.org/abs/1905.07799), -which achieved state-of-the-art language modeling results at the time of publication. - -We manage to reproduce their result in fairseq and keep most of the -[original implementation](https://github.com/facebookresearch/adaptive-span) untouched. -You can refer to the their sweep file as well if any combination of hyperparameter is not clear. - -##### 0. Setup - -First you need to process the Enwik8 dataset, we use the pre-tokenized dataset -from [adaptive span paper](https://github.com/facebookresearch/adaptive-span/blob/master/get_data.sh). -You can download the dataset, and then run: -```bash -fairseq-preprocess --only-source --trainpref ~/data/enwik8/train.txt \ - --validpref ~/data/enwik8/valid.txt --testpref ~/data/enwik8/test.txt \ - --destdir ~/data/enwik8/data-bin/ --joined-dictionary --workers 20 -``` - -##### 1. 
Train a Adaptive Span model on Enwik8 - -We will train a 12-layer Adaptive Span model following the [hyperparameters -used in the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). - -The following command assumes 4 GPUs, so that the total batch size is 64 -sequences (4 x 16). Training should take 2-3 days on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/adaptive_span \ - --data ~/data/enwik8/data-bin/ \ - --fp16 --fp16-no-flatten-grads --max-update 600000 \ - --task truncated_bptt_lm --tokens-per-sample 512 --arch adaptive_span \ - --n-layer 12 --d-model 512 --n-head 8 --d-inner 2048 --dropout 0.3 \ - --attn-span 8192 --optimizer adagrad_with_grad_clip --adagrad-clip 0.03 \ - --validate-interval-updates 1000 \ - --lr-scheduler fixed --warmup-updates 32000 --batch-size-valid 32 \ - --lr 0.07 --criterion adaptive_span_loss --batch-size 16 --update-freq 1 \ - --seed 2 --log-format json --log-interval 25 --aux-loss-scaler 5e-07 -``` -This should land around 1.05 on validation, 1.03 on test. You can lower the ---aux-loss-scaler for better performance (longer span). It gives ~0.03 bpc -improvement to the transformerXL baseline here. -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. -You can also reproduce the transformerXL result on enwik8 using this code base. -It should land around 1.06 on test,matching the [original paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_enwik8_base.sh). -You can try by -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - ~/data/enwik8/data-bin/ \ - --task truncated_bptt_lm --fp16 --max-update 400000 \ - --tokens-per-sample 512 --arch transformer_xl --n-layer 12 \ - --d-model 512 --n-head 8 --d-head 64 --d-inner 2048 --dropout 0.1 \ - --dropatt 0.0 --mem-len 512 --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 \ - --lr 0.0 --lr 0.00025 --batch-size 15 \ - --update-freq 1 --seed 2 --log-format json --log-interval 25 \ - --fp16 -``` - -##### 2. Evaluate -For Adaptive Span: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/adaptive_span \ - --task truncated_bptt_lm --batch-size 8 --tokens-per-sample 512 --gen-subset test -``` -For Transformer-XL evaluation: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ --task truncated_bptt_lm --batch-size 8 \ - --tokens-per-sample 80 \ - --model-overrides '{"mem_len":2100,"clamp_len":820,"same_length":True}' \ - --gen-subset valid -``` - -*Note:* During training the model saw 512 tokens of context -(``--tokens-per-sample=512``), with batch size 8. These settings match the evaluation -settings from [the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). 
diff --git a/spaces/asigalov61/Euterpe-X/javascript/README.md b/spaces/asigalov61/Euterpe-X/javascript/README.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Omer.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Omer.html deleted file mode 100644 index 294ea99b499e70cccfb5b1d361ebf8df5e02205d..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Omer.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Omer - - - - -
Omer

SWE mentor - from the waitlist

https://www.linkedin.com/in/rdobu/

• FS developer, engineering manager

Mentorship exp
• already doing this for free
• A friend's brother wanted help
  • I gave him books and stuff
  • checked in every couple of months
  • he got rejected at my company and I helped him level up his skills and get back in

He was rushed due to a last-minute meeting. Will rebook.

What do beginners struggle with?

What can you help with and how?

Questions about SM
    - - - \ No newline at end of file diff --git a/spaces/awacke1/BigScienceBloomRootsMemory/app.py b/spaces/awacke1/BigScienceBloomRootsMemory/app.py deleted file mode 100644 index f35a3683a3e6d4966029101e7f529374a3ffacba..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BigScienceBloomRootsMemory/app.py +++ /dev/null @@ -1,284 +0,0 @@ -import http.client as http_client -import json -import logging -import os -import re -import string -import traceback - -import gradio as gr -import requests -from huggingface_hub import HfApi - -hf_api = HfApi() -roots_datasets = {dset.id.split("/")[-1]:dset for dset in hf_api.list_datasets(author="bigscience-data", use_auth_token=os.environ.get("bigscience_data_token"))} -#roots_datasets = {dset.id.split("/")[-1]:dset for dset in hf_api.list_datasets(author="bigscience-data", use_auth_token=os.environ.get("HF_TOKEN"))} - -def get_docid_html(docid): - data_org, dataset, docid = docid.split("/") - metadata = roots_datasets[dataset] - if metadata.private: - docid_html = ( - f"🔒{dataset}/{docid}' - ) - else: - docid_html = ( - f"{dataset}/{docid}' - ) - return docid_html - - -PII_TAGS = {"KEY", "EMAIL", "USER", "IP_ADDRESS", "ID", "IPv4", "IPv6"} -PII_PREFIX = "PI:" - - -def process_pii(text): - for tag in PII_TAGS: - text = text.replace( - PII_PREFIX + tag, - """REDACTED {}""".format(tag), - ) - return text - - -def process_results(results, highlight_terms): - if len(results) == 0: - return """

    - No results retrieved.



    """ - - results_html = "" - for result in results: - tokens = result["text"].split() - tokens_html = [] - for token in tokens: - if token in highlight_terms: - tokens_html.append("{}".format(token)) - else: - tokens_html.append(token) - tokens_html = " ".join(tokens_html) - tokens_html = process_pii(tokens_html) - meta_html = ( - """ -

    - {}

    """.format( - result["meta"]["url"], result["meta"]["url"] - ) - if "meta" in result and result["meta"] is not None and "url" in result["meta"] - else "" - ) - docid_html = get_docid_html(result["docid"]) - results_html += """{} -

    Document ID: {}

    -

    Language: {}

    -

    {}

    -
    - """.format( - meta_html, docid_html, result["lang"], tokens_html - ) - return results_html + "
    " - - -def scisearch(query, language, num_results=10): - try: - query = " ".join(query.split()) - if query == "" or query is None: - return "" - - post_data = {"query": query, "k": num_results} - if language != "detect_language": - post_data["lang"] = language - - output = requests.post( - os.environ.get("address"), - headers={"Content-type": "application/json"}, - data=json.dumps(post_data), - timeout=60, - ) - - payload = json.loads(output.text) - - if "err" in payload: - if payload["err"]["type"] == "unsupported_lang": - detected_lang = payload["err"]["meta"]["detected_lang"] - return f""" -

    - Detected language {detected_lang} is not supported.
    - Please choose a language from the dropdown or type another query. -




    """ - - results = payload["results"] - highlight_terms = payload["highlight_terms"] - - if language == "detect_language": - results = list(results.values())[0] - return ( - ( - f"""

    - Detected language: {results[0]["lang"]}




    """ - if len(results) > 0 and language == "detect_language" - else "" - ) - + process_results(results, highlight_terms) - ) - - if language == "all": - results_html = "" - for lang, results_for_lang in results.items(): - if len(results_for_lang) == 0: - results_html += f"""

    - No results for language: {lang}


    """ - continue - - collapsible_results = f""" -
    - - Results for language: {lang}
    -
    - {process_results(results_for_lang, highlight_terms)} -
    """ - results_html += collapsible_results - return results_html - - results = list(results.values())[0] - return process_results(results, highlight_terms) - - except Exception as e: - results_html = f""" -

    - Raised {type(e).__name__}

    -

    - Check if a relevant discussion already exists in the Community tab. If not, please open a discussion. -

    - """ - print(e) - print(traceback.format_exc()) - - return results_html - - -def flag(query, language, num_results, issue_description): - try: - post_data = {"query": query, "k": num_results, "flag": True, "description": issue_description} - if language != "detect_language": - post_data["lang"] = language - - output = requests.post( - os.environ.get("address"), - headers={"Content-type": "application/json"}, - data=json.dumps(post_data), - timeout=120, - ) - - results = json.loads(output.text) - except: - print("Error flagging") - return "" - - -description = """#

    🌸 🔎 Bloom Searcher 🔍 🌸

    -Tool design for Roots: [URL](https://huggingface.co/spaces/bigscience-data/scisearch/blob/main/roots_search_tool_specs.pdf). -Bloom on Wikipedia: [URL](https://en.wikipedia.org/wiki/BLOOM_(language_model)). -Bloom Video Playlist: [URL](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14). -Access full corpus check [URL](https://forms.gle/qyYswbEL5kA23Wu99). - -Big Science - How to get started -Big Science is a 176B parameter new ML model that was trained on a set of datasets for Natural Language processing, and many other tasks that are not yet explored.. Below is the set of the papers, models, links, and datasets around big science which promises to be the best, most recent large model of its kind benefitting all science pursuits. - -Model: https://huggingface.co/bigscience/bloom -Papers: -BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100 -Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism https://arxiv.org/abs/1909.08053 -8-bit Optimizers via Block-wise Quantization https://arxiv.org/abs/2110.02861 -Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation https://arxiv.org/abs/2108.12409 -https://huggingface.co/models?other=doi:10.57967/hf/0003 -217 Other Models optimizing use of bloom via specialization: https://huggingface.co/models?other=bloom -Datasets -Universal Dependencies: https://paperswithcode.com/dataset/universal-dependencies -WMT 2014: https://paperswithcode.com/dataset/wmt-2014 -The Pile: https://paperswithcode.com/dataset/the-pile -HumanEval: https://paperswithcode.com/dataset/humaneval -FLORES-101: https://paperswithcode.com/dataset/flores-101 -CrowS-Pairs: https://paperswithcode.com/dataset/crows-pairs -WikiLingua: https://paperswithcode.com/dataset/wikilingua -MTEB: https://paperswithcode.com/dataset/mteb -xP3: https://paperswithcode.com/dataset/xp3 -DiaBLa: https://paperswithcode.com/dataset/diabla - -""" - - -if __name__ == "__main__": - demo = gr.Blocks( - css=".underline-on-hover:hover { text-decoration: underline; } .flagging { font-size:12px; color:Silver; }" - ) - - with demo: - with gr.Row(): - gr.Markdown(value=description) - with gr.Row(): - query = gr.Textbox(lines=1, max_lines=1, placeholder="Type your query here...", label="Query") - with gr.Row(): - lang = gr.Dropdown( - choices=[ - "ar", - "ca", - "code", - "en", - "es", - "eu", - "fr", - "id", - "indic", - "nigercongo", - "pt", - "vi", - "zh", - "detect_language", - "all", - ], - value="en", - label="Language", - ) - with gr.Row(): - k = gr.Slider(1, 100, value=10, step=1, label="Max Results") - with gr.Row(): - submit_btn = gr.Button("Submit") - with gr.Row(): - results = gr.HTML(label="Results") - flag_description = """ -

    - If you choose to flag your search, we will save the query, language and the number of results you requested. - Please consider adding any additional context in the box on the right.

    """ - with gr.Column(visible=False) as flagging_form: - flag_txt = gr.Textbox( - lines=1, - placeholder="Type here...", - label="""If you choose to flag your search, we will save the query, language and the number of results - you requested. Please consider adding relevant additional context below:""", - ) - flag_btn = gr.Button("Flag Results") - flag_btn.click(flag, inputs=[query, lang, k, flag_txt], outputs=[flag_txt]) - - def submit(query, lang, k): - query = query.strip() - if query is None or query == "": - return "", "" - return { - results: scisearch(query, lang, k), - flagging_form: gr.update(visible=True), - } - - query.submit(fn=submit, inputs=[query, lang, k], outputs=[results, flagging_form]) - submit_btn.click(submit, inputs=[query, lang, k], outputs=[results, flagging_form]) - - demo.launch(enable_queue=True, debug=True) \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js deleted file mode 100644 index 98b2ef170e28a668f69b7197184d2424ac0ed4f0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SSAOPass.js +++ /dev/null @@ -1,404 +0,0 @@ -/** - * @author Mugen87 / https://github.com/Mugen87 - */ - -THREE.SSAOPass = function ( scene, camera, width, height ) { - - THREE.Pass.call( this ); - - this.width = ( width !== undefined ) ? width : 512; - this.height = ( height !== undefined ) ? height : 512; - - this.clear = true; - - this.camera = camera; - this.scene = scene; - - this.kernelRadius = 8; - this.kernelSize = 32; - this.kernel = []; - this.noiseTexture = null; - this.output = 0; - - this.minDistance = 0.005; - this.maxDistance = 0.1; - - // - - this.generateSampleKernel(); - this.generateRandomKernelRotations(); - - // beauty render target with depth buffer - - var depthTexture = new THREE.DepthTexture(); - depthTexture.type = THREE.UnsignedShortType; - depthTexture.minFilter = THREE.NearestFilter; - depthTexture.maxFilter = THREE.NearestFilter; - - this.beautyRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, { - minFilter: THREE.LinearFilter, - magFilter: THREE.LinearFilter, - format: THREE.RGBAFormat, - depthTexture: depthTexture, - depthBuffer: true - } ); - - // normal render target - - this.normalRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.NearestFilter, - format: THREE.RGBAFormat - } ); - - // ssao render target - - this.ssaoRenderTarget = new THREE.WebGLRenderTarget( this.width, this.height, { - minFilter: THREE.LinearFilter, - magFilter: THREE.LinearFilter, - format: THREE.RGBAFormat - } ); - - this.blurRenderTarget = this.ssaoRenderTarget.clone(); - - // ssao material - - if ( THREE.SSAOShader === undefined ) { - - console.error( 'THREE.SSAOPass: The pass relies on THREE.SSAOShader.' 
); - - } - - this.ssaoMaterial = new THREE.ShaderMaterial( { - defines: Object.assign( {}, THREE.SSAOShader.defines ), - uniforms: THREE.UniformsUtils.clone( THREE.SSAOShader.uniforms ), - vertexShader: THREE.SSAOShader.vertexShader, - fragmentShader: THREE.SSAOShader.fragmentShader, - blending: THREE.NoBlending - } ); - - this.ssaoMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture; - this.ssaoMaterial.uniforms[ 'tNormal' ].value = this.normalRenderTarget.texture; - this.ssaoMaterial.uniforms[ 'tDepth' ].value = this.beautyRenderTarget.depthTexture; - this.ssaoMaterial.uniforms[ 'tNoise' ].value = this.noiseTexture; - this.ssaoMaterial.uniforms[ 'kernel' ].value = this.kernel; - this.ssaoMaterial.uniforms[ 'cameraNear' ].value = this.camera.near; - this.ssaoMaterial.uniforms[ 'cameraFar' ].value = this.camera.far; - this.ssaoMaterial.uniforms[ 'resolution' ].value.set( this.width, this.height ); - this.ssaoMaterial.uniforms[ 'cameraProjectionMatrix' ].value.copy( this.camera.projectionMatrix ); - this.ssaoMaterial.uniforms[ 'cameraInverseProjectionMatrix' ].value.getInverse( this.camera.projectionMatrix ); - - // normal material - - this.normalMaterial = new THREE.MeshNormalMaterial(); - this.normalMaterial.blending = THREE.NoBlending; - - // blur material - - this.blurMaterial = new THREE.ShaderMaterial( { - defines: Object.assign( {}, THREE.SSAOBlurShader.defines ), - uniforms: THREE.UniformsUtils.clone( THREE.SSAOBlurShader.uniforms ), - vertexShader: THREE.SSAOBlurShader.vertexShader, - fragmentShader: THREE.SSAOBlurShader.fragmentShader - } ); - this.blurMaterial.uniforms[ 'tDiffuse' ].value = this.ssaoRenderTarget.texture; - this.blurMaterial.uniforms[ 'resolution' ].value.set( this.width, this.height ); - - // material for rendering the depth - - this.depthRenderMaterial = new THREE.ShaderMaterial( { - defines: Object.assign( {}, THREE.SSAODepthShader.defines ), - uniforms: THREE.UniformsUtils.clone( THREE.SSAODepthShader.uniforms ), - vertexShader: THREE.SSAODepthShader.vertexShader, - fragmentShader: THREE.SSAODepthShader.fragmentShader, - blending: THREE.NoBlending - } ); - this.depthRenderMaterial.uniforms[ 'tDepth' ].value = this.beautyRenderTarget.depthTexture; - this.depthRenderMaterial.uniforms[ 'cameraNear' ].value = this.camera.near; - this.depthRenderMaterial.uniforms[ 'cameraFar' ].value = this.camera.far; - - // material for rendering the content of a render target - - this.copyMaterial = new THREE.ShaderMaterial( { - uniforms: THREE.UniformsUtils.clone( THREE.CopyShader.uniforms ), - vertexShader: THREE.CopyShader.vertexShader, - fragmentShader: THREE.CopyShader.fragmentShader, - transparent: true, - depthTest: false, - depthWrite: false, - blendSrc: THREE.DstColorFactor, - blendDst: THREE.ZeroFactor, - blendEquation: THREE.AddEquation, - blendSrcAlpha: THREE.DstAlphaFactor, - blendDstAlpha: THREE.ZeroFactor, - blendEquationAlpha: THREE.AddEquation - } ); - - this.fsQuad = new THREE.Pass.FullScreenQuad( null ); - - this.originalClearColor = new THREE.Color(); - -}; - -THREE.SSAOPass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), { - - constructor: THREE.SSAOPass, - - dispose: function () { - - // dispose render targets - - this.beautyRenderTarget.dispose(); - this.normalRenderTarget.dispose(); - this.ssaoRenderTarget.dispose(); - this.blurRenderTarget.dispose(); - - // dispose geometry - - this.quad.geometry.dispose(); - - // dispose materials - - this.normalMaterial.dispose(); - this.blurMaterial.dispose(); - 
this.copyMaterial.dispose(); - this.depthRenderMaterial.dispose(); - - }, - - render: function ( renderer, writeBuffer /*, readBuffer, deltaTime, maskActive */ ) { - - // render beauty and depth - - renderer.setRenderTarget( this.beautyRenderTarget ); - renderer.clear(); - renderer.render( this.scene, this.camera ); - - // render normals - - this.renderOverride( renderer, this.normalMaterial, this.normalRenderTarget, 0x7777ff, 1.0 ); - - // render SSAO - - this.ssaoMaterial.uniforms[ 'kernelRadius' ].value = this.kernelRadius; - this.ssaoMaterial.uniforms[ 'minDistance' ].value = this.minDistance; - this.ssaoMaterial.uniforms[ 'maxDistance' ].value = this.maxDistance; - this.renderPass( renderer, this.ssaoMaterial, this.ssaoRenderTarget ); - - // render blur - - this.renderPass( renderer, this.blurMaterial, this.blurRenderTarget ); - - // output result to screen - - switch ( this.output ) { - - case THREE.SSAOPass.OUTPUT.SSAO: - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.ssaoRenderTarget.texture; - this.copyMaterial.blending = THREE.NoBlending; - this.renderPass( renderer, this.copyMaterial, null ); - - break; - - case THREE.SSAOPass.OUTPUT.Blur: - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.blurRenderTarget.texture; - this.copyMaterial.blending = THREE.NoBlending; - this.renderPass( renderer, this.copyMaterial, null ); - - break; - - case THREE.SSAOPass.OUTPUT.Beauty: - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture; - this.copyMaterial.blending = THREE.NoBlending; - this.renderPass( renderer, this.copyMaterial, null ); - - break; - - case THREE.SSAOPass.OUTPUT.Depth: - - this.renderPass( renderer, this.depthRenderMaterial, null ); - - break; - - case THREE.SSAOPass.OUTPUT.Normal: - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.normalRenderTarget.texture; - this.copyMaterial.blending = THREE.NoBlending; - this.renderPass( renderer, this.copyMaterial, null ); - - break; - - case THREE.SSAOPass.OUTPUT.Default: - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.beautyRenderTarget.texture; - this.copyMaterial.blending = THREE.NoBlending; - this.renderPass( renderer, this.copyMaterial, null ); - - this.copyMaterial.uniforms[ 'tDiffuse' ].value = this.blurRenderTarget.texture; - this.copyMaterial.blending = THREE.CustomBlending; - this.renderPass( renderer, this.copyMaterial, this.renderToScreen ? null : writeBuffer ); - - break; - - default: - console.warn( 'THREE.SSAOPass: Unknown output type.' 
); - - } - - }, - - renderPass: function ( renderer, passMaterial, renderTarget, clearColor, clearAlpha ) { - - // save original state - this.originalClearColor.copy( renderer.getClearColor() ); - var originalClearAlpha = renderer.getClearAlpha(); - var originalAutoClear = renderer.autoClear; - - renderer.setRenderTarget( renderTarget ); - - // setup pass state - renderer.autoClear = false; - if ( ( clearColor !== undefined ) && ( clearColor !== null ) ) { - - renderer.setClearColor( clearColor ); - renderer.setClearAlpha( clearAlpha || 0.0 ); - renderer.clear(); - - } - - this.fsQuad.material = passMaterial; - this.fsQuad.render( renderer ); - - // restore original state - renderer.autoClear = originalAutoClear; - renderer.setClearColor( this.originalClearColor ); - renderer.setClearAlpha( originalClearAlpha ); - - }, - - renderOverride: function ( renderer, overrideMaterial, renderTarget, clearColor, clearAlpha ) { - - this.originalClearColor.copy( renderer.getClearColor() ); - var originalClearAlpha = renderer.getClearAlpha(); - var originalAutoClear = renderer.autoClear; - - renderer.setRenderTarget( renderTarget ); - renderer.autoClear = false; - - clearColor = overrideMaterial.clearColor || clearColor; - clearAlpha = overrideMaterial.clearAlpha || clearAlpha; - - if ( ( clearColor !== undefined ) && ( clearColor !== null ) ) { - - renderer.setClearColor( clearColor ); - renderer.setClearAlpha( clearAlpha || 0.0 ); - renderer.clear(); - - } - - this.scene.overrideMaterial = overrideMaterial; - renderer.render( this.scene, this.camera ); - this.scene.overrideMaterial = null; - - // restore original state - - renderer.autoClear = originalAutoClear; - renderer.setClearColor( this.originalClearColor ); - renderer.setClearAlpha( originalClearAlpha ); - - }, - - setSize: function ( width, height ) { - - this.width = width; - this.height = height; - - this.beautyRenderTarget.setSize( width, height ); - this.ssaoRenderTarget.setSize( width, height ); - this.normalRenderTarget.setSize( width, height ); - this.blurRenderTarget.setSize( width, height ); - - this.ssaoMaterial.uniforms[ 'resolution' ].value.set( width, height ); - this.ssaoMaterial.uniforms[ 'cameraProjectionMatrix' ].value.copy( this.camera.projectionMatrix ); - this.ssaoMaterial.uniforms[ 'cameraInverseProjectionMatrix' ].value.getInverse( this.camera.projectionMatrix ); - - this.blurMaterial.uniforms[ 'resolution' ].value.set( width, height ); - - }, - - generateSampleKernel: function () { - - var kernelSize = this.kernelSize; - var kernel = this.kernel; - - for ( var i = 0; i < kernelSize; i ++ ) { - - var sample = new THREE.Vector3(); - sample.x = ( Math.random() * 2 ) - 1; - sample.y = ( Math.random() * 2 ) - 1; - sample.z = Math.random(); - - sample.normalize(); - - var scale = i / kernelSize; - scale = THREE.Math.lerp( 0.1, 1, scale * scale ); - sample.multiplyScalar( scale ); - - kernel.push( sample ); - - } - - }, - - generateRandomKernelRotations: function () { - - var width = 4, height = 4; - - if ( SimplexNoise === undefined ) { - - console.error( 'THREE.SSAOPass: The pass relies on THREE.SimplexNoise.' 
); - - } - - var simplex = new SimplexNoise(); - - var size = width * height; - var data = new Float32Array( size * 4 ); - - for ( var i = 0; i < size; i ++ ) { - - var stride = i * 4; - - var x = ( Math.random() * 2 ) - 1; - var y = ( Math.random() * 2 ) - 1; - var z = 0; - - var noise = simplex.noise3d( x, y, z ); - - data[ stride ] = noise; - data[ stride + 1 ] = noise; - data[ stride + 2 ] = noise; - data[ stride + 3 ] = 1; - - } - - this.noiseTexture = new THREE.DataTexture( data, width, height, THREE.RGBAFormat, THREE.FloatType ); - this.noiseTexture.wrapS = THREE.RepeatWrapping; - this.noiseTexture.wrapT = THREE.RepeatWrapping; - this.noiseTexture.needsUpdate = true; - - } - -} ); - -THREE.SSAOPass.OUTPUT = { - 'Default': 0, - 'SSAO': 1, - 'Blur': 2, - 'Beauty': 3, - 'Depth': 4, - 'Normal': 5 -}; diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/__init__.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bentrevett/emotion-prediction/app.py b/spaces/bentrevett/emotion-prediction/app.py deleted file mode 100644 index 85a01123a594e766ccce9608df2d1f8f7e22bdf5..0000000000000000000000000000000000000000 --- a/spaces/bentrevett/emotion-prediction/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import streamlit as st -import transformers -import matplotlib.pyplot as plt - - -@st.cache(allow_output_mutation=True, show_spinner=False) -def get_pipe(): - model_name = "joeddav/distilbert-base-uncased-go-emotions-student" - model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name) - tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) - pipe = transformers.pipeline('text-classification', model=model, tokenizer=tokenizer, - return_all_scores=True, truncation=True) - return pipe - - -def sort_predictions(predictions): - return sorted(predictions, key=lambda x: x['score'], reverse=True) - - -st.set_page_config(page_title="Emotion Prediction") -st.title("Emotion Prediction") -st.write("Type text into the text box and then press 'Predict' to get the predicted emotion.") - -default_text = "I really love using HuggingFace Spaces!" - -text = st.text_area('Enter text here:', value=default_text) -submit = st.button('Predict') - -with st.spinner("Loading model..."): - pipe = get_pipe() - -if (submit and len(text.strip()) > 0) or len(text.strip()) > 0: - - prediction = pipe(text)[0] - prediction = sort_predictions(prediction) - - fig, ax = plt.subplots() - ax.bar(x=[i for i, _ in enumerate(prediction)], - height=[p['score'] for p in prediction], - tick_label=[p['label'] for p in prediction]) - ax.tick_params(rotation=90) - ax.set_ylim(0, 1) - - st.header('Prediction:') - st.pyplot(fig) - - prediction = dict([(p['label'], p['score']) for p in prediction]) - st.header('Raw values:') - st.json(prediction) diff --git a/spaces/bhaskartripathi/Text2Diagram/app.py b/spaces/bhaskartripathi/Text2Diagram/app.py deleted file mode 100644 index 10dad40550502dfaa321d1e50eb54734e514f661..0000000000000000000000000000000000000000 --- a/spaces/bhaskartripathi/Text2Diagram/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import gradio as gr -import openai - - -def generate_plantuml2(api_key, text): - openai.api_key = api_key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - { - "role": "system", - "content": "You are ChatGPT, a large language model trained by OpenAI. 
Generate PlantUML code for the following use case or code in natural language.", - }, - {"role": "user", "content": text}, - ], - ) - print(response) - return response["choices"][0]["message"]["content"] - -def generate_plantuml(api_key, text): - openai.api_key = api_key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - { - "role": "system", - "content": "You are ChatGPT, a large language model trained by OpenAI. Generate PlantUML code for an architecture that uses Azure services and icons for the following use case or code in natural language. Choose the actors appropriately as per the use case. Do not use '!define SPRITESURL https://raw.githubusercontent.com/rabelenda/cicon-plantuml-sprites/v1.1.0/sprites' as it is outdated.", - }, - {"role": "user", "content": text}, - ], - ) - print(response) - return response["choices"][0]["message"]["content"] - -sample_text = ''' -!define AzurePuml https://raw.githubusercontent.com/plantuml-stdlib/Azure-PlantUML/master/dist -!includeurl AzurePuml/AzureCommon.puml - -actor Customer -actor Restaurant - -Customer -> AzureAPIManagement : Login -AzureAPIManagement -> AzureActiveDirectory : Authenticate User -AzureActiveDirectory -> AzureAPIManagement : Return User Info - -Customer -> AzureAPIManagement : Place Order -AzureAPIManagement -> AzureFunctionApp : Process Order -AzureFunctionApp -> AzureCosmosDB : Store Order Data -AzureFunctionApp -> Restaurant : Send Order Details -Restaurant -> AzureFunctionApp : Update Order Status -AzureFunctionApp -> AzureCosmosDB : Update Order Data -AzureFunctionApp -> Customer : Send Order Status - -AzureFunctionApp -> AzureNotificationHubs : Send Push Notification - -legend right - Online Food Ordering App Architecture -endlegend -''' - -iface = gr.Interface( - fn=generate_plantuml, - inputs=[ - gr.inputs.Textbox(label="OpenAI API Key"), - gr.inputs.Textbox(label="Enter use case or code in natural language") - ], - outputs=gr.outputs.Textbox(label="Generated PlantUML Code"), - title="PlantUML Code Generator", - description="Generate PlantUML code using OpenAI's GPT-3.5-Turbo", -) - -iface.launch() \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md b/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md deleted file mode 100644 index 7851da2331a8592089cdd05dfc0db2b56e13b57b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Atomic Mail Verifier Download Crack For Gta.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    codigo limpio anaya pdf
    5 fjali pohore dhe fjalit mohore.rar
    super excellent academic intelligence book pdf free download
    Bon Iver - Bon Iver(2011) [FLAC][DELUXE EDITION].rar
    Mindjet MindManager 2018 18.1.155 Crack .rar
    paradisebirds anna and nelly
    descarga crack para civilcad 2013 32 bits
    Dum Laga Ke Haisha hd 720p movie download
    billboard top 100 songs 2008 torrent
    Need For Speed NFS Most Wanted Black Edition repack Mr DJ download

    -

    atomic mail verifier download crack for gta


    DOWNLOAD 🆗 https://urloso.com/2uyP6k



    -

    tinker tailor soldier spy 720p download movie
    GraphicRiver CD Case Mock Up
    gta 4 full game highly compressed 100 mb free 19
    Imgsrc ru password list
    X-Men: Apocalypse (English) 2 full movie download in 720p hd
    download Yeh Dillagi 720p hd
    awara bengali full movie 720p download 11
    imbratisare in amurg sandra brown pdf download
    farming simulator 2015 crack multiplayer
    plant anatomy book by b p pandey pdf 241

    -

    download harry potter and the deathly hallows part 1 in hindi 720p
    Classical mechanics by gupta kumar sharma pdf
    Kuch Kuch Hota Hai kickass download movie
    el libro rojo de las matematicas moises villena pdf download
    download crack artcam 2008 torrent
    Paheli 720p in dual audio hindi
    download Prem movie in hindi 720p
    Adobe Photoshop CS6 v. 13.0 Keygen PASSWORD.txt.rar
    tktorrents tamil movies.com
    The Last Witch Hunter (English) 2 in hindi 720p torrent

    -

    Copyspider 1.1.16 key generator
    Young Strawberry-patch35-ira11 81 BD-Company BD-Team Lolitaguy lolita.14
    crack solidworks 2012 64 bit windows 8 solid squad
    descargar tricalc 8.0 full espavol gratis
    Shirin Farhad Ki Toh Nikal Padi full movies 720p torrent
    mozilla firefox 3.8 download
    2 chainz i'm different mp3 download
    Airlift movie 720p download movie
    astro vision lifesign software with crack
    mp4 sibel kekili porno indir

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md b/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md deleted file mode 100644 index 1d659b6e62b57cc29c54502852d39d3d540bc41a..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Descargar Super Smash Bros Brawl Iso Gua completa para instalar y jugar en tu PC.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Descargar Super Smash Bros Brawl Iso


    Download Zip ✫✫✫ https://urloso.com/2uyQuQ



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md b/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md deleted file mode 100644 index c7c3bff51db8505dcd66dc0f8d15111660321ca7..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hypertherm Pronest 2012 rar 40 The Software That Supports All Major Brands Models and Cut Processes.md +++ /dev/null @@ -1,6 +0,0 @@ -

    {hypertherm Pronest 2012 rar} 40


    Download Zip ———>>> https://urloso.com/2uyPyN



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bodah/RVC-Models-bo/README.md b/spaces/bodah/RVC-Models-bo/README.md deleted file mode 100644 index d7daba58e556887b4fac7494ec118958421a3c2f..0000000000000000000000000000000000000000 --- a/spaces/bodah/RVC-Models-bo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: RVC2 Locochon -emoji: 🔥 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: Ukrania/RVC-Models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bradarrML/diffuse-the-rest/build/_app/immutable/start-449f9cce.js b/spaces/bradarrML/diffuse-the-rest/build/_app/immutable/start-449f9cce.js deleted file mode 100644 index 9571c158c2e52f2199fd7216562c5da9bb2d6c36..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/diffuse-the-rest/build/_app/immutable/start-449f9cce.js +++ /dev/null @@ -1 +0,0 @@ -var We=Object.defineProperty;var Je=(s,e,n)=>e in s?We(s,e,{enumerable:!0,configurable:!0,writable:!0,value:n}):s[e]=n;var ue=(s,e,n)=>(Je(s,typeof e!="symbol"?e+"":e,n),n);import{S as He,i as Fe,s as Ge,a as Me,e as I,c as Ye,b as V,g as M,t as D,d as Y,f as T,h as z,j as Xe,o as _e,k as Ze,l as Qe,m as xe,n as de,p as J,q as et,r as tt,u as nt,v as B,w as ee,x as K,y as W,z as Ne}from"./chunks/index-032ac624.js";import{g as Ie,f as De,a as Te,s as G,b as ge,i as rt,c as at}from"./chunks/singletons-d6c43dab.js";class re{constructor(e,n){ue(this,"name","HttpError");ue(this,"stack");this.status=e,this.message=n!=null?n:`Error: ${e}`}toString(){return this.message}}class qe{constructor(e,n){this.status=e,this.location=n}}function st(s,e){return s==="/"||e==="ignore"?s:e==="never"?s.endsWith("/")?s.slice(0,-1):s:e==="always"&&!s.endsWith("/")?s+"/":s}function it(s){for(const e in s)s[e]=s[e].replace(/%23/g,"#").replace(/%3[Bb]/g,";").replace(/%2[Cc]/g,",").replace(/%2[Ff]/g,"/").replace(/%3[Ff]/g,"?").replace(/%3[Aa]/g,":").replace(/%40/g,"@").replace(/%26/g,"&").replace(/%3[Dd]/g,"=").replace(/%2[Bb]/g,"+").replace(/%24/g,"$");return s}class ot extends URL{get hash(){throw new Error("url.hash is inaccessible from load. Consider accessing hash from the page store within the script tag of your component.")}}function lt(s){let e=5381,n=s.length;if(typeof s=="string")for(;n;)e=e*33^s.charCodeAt(--n);else for(;n;)e=e*33^s[--n];return(e>>>0).toString(36)}const ae=window.fetch;function ct(s,e){let i=`script[sveltekit\\:data-type="data"][sveltekit\\:data-url=${JSON.stringify(typeof s=="string"?s:s.url)}]`;e&&typeof e.body=="string"&&(i+=`[sveltekit\\:data-body="${lt(e.body)}"]`);const r=document.querySelector(i);if(r&&r.textContent){const{body:u,...t}=JSON.parse(r.textContent);return Promise.resolve(new Response(u,t))}return ae(s,e)}const ft=/^(\.\.\.)?(\w+)(?:=(\w+))?$/;function ut(s){const e=[],n=[];let i=!0;return{pattern:s===""?/^\/$/:new RegExp(`^${s.split(/(?:@[a-zA-Z0-9_-]+)?(?:\/|$)/).map((u,t,l)=>{const d=decodeURIComponent(u),p=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(d);if(p)return e.push(p[1]),n.push(p[2]),"(?:/(.*))?";const g=t===l.length-1;return d&&"/"+d.split(/\[(.+?)\]/).map((E,P)=>{if(P%2){const $=ft.exec(E);if(!$)throw new Error(`Invalid param: ${E}. 
Params and matcher names can only have underscores and alphanumeric characters.`);const[,O,Z,Q]=$;return e.push(Z),n.push(Q),O?"(.*?)":"([^/]+?)"}return g&&E.includes(".")&&(i=!1),E.normalize().replace(/%5[Bb]/g,"[").replace(/%5[Dd]/g,"]").replace(/#/g,"%23").replace(/\?/g,"%3F").replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}).join("")}).join("")}${i?"/?":""}$`),names:e,types:n}}function dt(s,e,n,i){const r={};for(let u=0;u{const{pattern:d,names:p,types:g}=ut(i),E={id:i,exec:P=>{const $=d.exec(P);if($)return dt($,p,g,n)},errors:r.map(P=>s[P]),layouts:u.map(P=>s[P]),leaf:s[t],uses_server_data:!!l};return E.errors.length=E.layouts.length=Math.max(E.errors.length,E.layouts.length),E})}function ht(s,e){return new re(s,e)}function mt(s){let e,n,i;var r=s[0][0];function u(t){return{props:{data:t[1],errors:t[4]}}}return r&&(e=new r(u(s))),{c(){e&&B(e.$$.fragment),n=I()},l(t){e&&ee(e.$$.fragment,t),n=I()},m(t,l){e&&K(e,t,l),V(t,n,l),i=!0},p(t,l){const d={};if(l&2&&(d.data=t[1]),l&16&&(d.errors=t[4]),r!==(r=t[0][0])){if(e){M();const p=e;D(p.$$.fragment,1,0,()=>{W(p,1)}),Y()}r?(e=new r(u(t)),B(e.$$.fragment),T(e.$$.fragment,1),K(e,n.parentNode,n)):e=null}else r&&e.$set(d)},i(t){i||(e&&T(e.$$.fragment,t),i=!0)},o(t){e&&D(e.$$.fragment,t),i=!1},d(t){t&&z(n),e&&W(e,t)}}}function _t(s){let e,n,i;var r=s[0][0];function u(t){return{props:{data:t[1],$$slots:{default:[yt]},$$scope:{ctx:t}}}}return r&&(e=new r(u(s))),{c(){e&&B(e.$$.fragment),n=I()},l(t){e&&ee(e.$$.fragment,t),n=I()},m(t,l){e&&K(e,t,l),V(t,n,l),i=!0},p(t,l){const d={};if(l&2&&(d.data=t[1]),l&1053&&(d.$$scope={dirty:l,ctx:t}),r!==(r=t[0][0])){if(e){M();const p=e;D(p.$$.fragment,1,0,()=>{W(p,1)}),Y()}r?(e=new r(u(t)),B(e.$$.fragment),T(e.$$.fragment,1),K(e,n.parentNode,n)):e=null}else r&&e.$set(d)},i(t){i||(e&&T(e.$$.fragment,t),i=!0)},o(t){e&&D(e.$$.fragment,t),i=!1},d(t){t&&z(n),e&&W(e,t)}}}function gt(s){let e,n,i;var r=s[0][1];function u(t){return{props:{data:t[2],errors:t[4]}}}return r&&(e=new r(u(s))),{c(){e&&B(e.$$.fragment),n=I()},l(t){e&&ee(e.$$.fragment,t),n=I()},m(t,l){e&&K(e,t,l),V(t,n,l),i=!0},p(t,l){const d={};if(l&4&&(d.data=t[2]),l&16&&(d.errors=t[4]),r!==(r=t[0][1])){if(e){M();const p=e;D(p.$$.fragment,1,0,()=>{W(p,1)}),Y()}r?(e=new r(u(t)),B(e.$$.fragment),T(e.$$.fragment,1),K(e,n.parentNode,n)):e=null}else r&&e.$set(d)},i(t){i||(e&&T(e.$$.fragment,t),i=!0)},o(t){e&&D(e.$$.fragment,t),i=!1},d(t){t&&z(n),e&&W(e,t)}}}function wt(s){let e,n,i;var r=s[0][1];function u(t){return{props:{data:t[2],$$slots:{default:[bt]},$$scope:{ctx:t}}}}return r&&(e=new r(u(s))),{c(){e&&B(e.$$.fragment),n=I()},l(t){e&&ee(e.$$.fragment,t),n=I()},m(t,l){e&&K(e,t,l),V(t,n,l),i=!0},p(t,l){const d={};if(l&4&&(d.data=t[2]),l&1033&&(d.$$scope={dirty:l,ctx:t}),r!==(r=t[0][1])){if(e){M();const p=e;D(p.$$.fragment,1,0,()=>{W(p,1)}),Y()}r?(e=new r(u(t)),B(e.$$.fragment),T(e.$$.fragment,1),K(e,n.parentNode,n)):e=null}else r&&e.$set(d)},i(t){i||(e&&T(e.$$.fragment,t),i=!0)},o(t){e&&D(e.$$.fragment,t),i=!1},d(t){t&&z(n),e&&W(e,t)}}}function bt(s){let e,n,i;var r=s[0][2];function u(t){return{props:{data:t[3]}}}return r&&(e=new r(u(s))),{c(){e&&B(e.$$.fragment),n=I()},l(t){e&&ee(e.$$.fragment,t),n=I()},m(t,l){e&&K(e,t,l),V(t,n,l),i=!0},p(t,l){const d={};if(l&8&&(d.data=t[3]),r!==(r=t[0][2])){if(e){M();const p=e;D(p.$$.fragment,1,0,()=>{W(p,1)}),Y()}r?(e=new r(u(t)),B(e.$$.fragment),T(e.$$.fragment,1),K(e,n.parentNode,n)):e=null}else r&&e.$set(d)},i(t){i||(e&&T(e.$$.fragment,t),i=!0)},o(t){e&&D(e.$$.fragment,t),i=!1},d(t){t&&z(n),e&&W(e,t)}}}function yt(s){let e,n,i,r;const 
u=[wt,gt],t=[];function l(d,p){return d[0][2]?0:1}return e=l(s),n=t[e]=u[e](s),{c(){n.c(),i=I()},l(d){n.l(d),i=I()},m(d,p){t[e].m(d,p),V(d,i,p),r=!0},p(d,p){let g=e;e=l(d),e===g?t[e].p(d,p):(M(),D(t[g],1,1,()=>{t[g]=null}),Y(),n=t[e],n?n.p(d,p):(n=t[e]=u[e](d),n.c()),T(n,1),n.m(i.parentNode,i))},i(d){r||(T(n),r=!0)},o(d){D(n),r=!1},d(d){t[e].d(d),d&&z(i)}}}function ze(s){let e,n=s[6]&&Ve(s);return{c(){e=Ze("div"),n&&n.c(),this.h()},l(i){e=Qe(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var r=xe(e);n&&n.l(r),r.forEach(z),this.h()},h(){de(e,"id","svelte-announcer"),de(e,"aria-live","assertive"),de(e,"aria-atomic","true"),J(e,"position","absolute"),J(e,"left","0"),J(e,"top","0"),J(e,"clip","rect(0 0 0 0)"),J(e,"clip-path","inset(50%)"),J(e,"overflow","hidden"),J(e,"white-space","nowrap"),J(e,"width","1px"),J(e,"height","1px")},m(i,r){V(i,e,r),n&&n.m(e,null)},p(i,r){i[6]?n?n.p(i,r):(n=Ve(i),n.c(),n.m(e,null)):n&&(n.d(1),n=null)},d(i){i&&z(e),n&&n.d()}}}function Ve(s){let e;return{c(){e=et(s[7])},l(n){e=tt(n,s[7])},m(n,i){V(n,e,i)},p(n,i){i&128&&nt(e,n[7])},d(n){n&&z(e)}}}function vt(s){let e,n,i,r,u;const t=[_t,mt],l=[];function d(g,E){return g[0][1]?0:1}e=d(s),n=l[e]=t[e](s);let p=s[5]&&ze(s);return{c(){n.c(),i=Me(),p&&p.c(),r=I()},l(g){n.l(g),i=Ye(g),p&&p.l(g),r=I()},m(g,E){l[e].m(g,E),V(g,i,E),p&&p.m(g,E),V(g,r,E),u=!0},p(g,[E]){let P=e;e=d(g),e===P?l[e].p(g,E):(M(),D(l[P],1,1,()=>{l[P]=null}),Y(),n=l[e],n?n.p(g,E):(n=l[e]=t[e](g),n.c()),T(n,1),n.m(i.parentNode,i)),g[5]?p?p.p(g,E):(p=ze(g),p.c(),p.m(r.parentNode,r)):p&&(p.d(1),p=null)},i(g){u||(T(n),u=!0)},o(g){D(n),u=!1},d(g){l[e].d(g),g&&z(i),p&&p.d(g),g&&z(r)}}}function kt(s,e,n){let{stores:i}=e,{page:r}=e,{components:u}=e,{data_0:t=null}=e,{data_1:l=null}=e,{data_2:d=null}=e,{errors:p}=e;Xe(i.page.notify);let g=!1,E=!1,P=null;return _e(()=>{const $=i.page.subscribe(()=>{g&&(n(6,E=!0),n(7,P=document.title||"untitled page"))});return n(5,g=!0),$}),s.$$set=$=>{"stores"in $&&n(8,i=$.stores),"page"in $&&n(9,r=$.page),"components"in $&&n(0,u=$.components),"data_0"in $&&n(1,t=$.data_0),"data_1"in $&&n(2,l=$.data_1),"data_2"in $&&n(3,d=$.data_2),"errors"in $&&n(4,p=$.errors)},s.$$.update=()=>{s.$$.dirty&768&&i.page.set(r)},[u,t,l,d,p,g,E,P,i,r]}class $t extends He{constructor(e){super(),Fe(this,e,kt,vt,Ge,{stores:8,page:9,components:0,data_0:1,data_1:2,data_2:3,errors:4})}}const Et=function(){const e=document.createElement("link").relList;return e&&e.supports&&e.supports("modulepreload")?"modulepreload":"preload"}(),St=function(s,e){return new URL(s,e).href},Be={},pe=function(e,n,i){return!n||n.length===0?e():Promise.all(n.map(r=>{if(r=St(r,i),r in Be)return;Be[r]=!0;const u=r.endsWith(".css"),t=u?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${r}"]${t}`))return;const l=document.createElement("link");if(l.rel=u?"stylesheet":Et,u||(l.as="script",l.crossOrigin=""),l.href=r,document.head.appendChild(l),u)return new Promise((d,p)=>{l.addEventListener("load",d),l.addEventListener("error",()=>p(new Error(`Unable to preload CSS for 
${r}`)))})})).then(()=>e())},Lt={},se=[()=>pe(()=>import("./chunks/0-be487481.js"),["chunks/0-be487481.js","components/pages/_layout.svelte-f7e87a93.js","assets/+layout-7c2f4ad7.css","chunks/index-032ac624.js"],import.meta.url),()=>pe(()=>import("./chunks/1-e860cfcd.js"),["chunks/1-e860cfcd.js","components/error.svelte-c15ad458.js","chunks/index-032ac624.js","chunks/singletons-d6c43dab.js"],import.meta.url),()=>pe(()=>import("./chunks/2-6ab63caf.js"),["chunks/2-6ab63caf.js","components/pages/_page.svelte-1525ec40.js","assets/+page-376b236d.css","chunks/index-032ac624.js"],import.meta.url)],Rt={"":[[1],[0],2]},Ke="sveltekit:scroll",H="sveltekit:index",he=pt(se,Rt,Lt),we=se[0],be=se[1];we();be();let x={};try{x=JSON.parse(sessionStorage[Ke])}catch{}function me(s){x[s]=ge()}function Ut({target:s,base:e,trailing_slash:n}){var Ue;const i=[],r={id:null,promise:null},u={before_navigate:[],after_navigate:[]};let t={branch:[],error:null,session_id:0,url:null},l=!1,d=!0,p=!1,g=1,E=null,P,$=!0,O=(Ue=history.state)==null?void 0:Ue[H];O||(O=Date.now(),history.replaceState({...history.state,[H]:O},"",location.href));const Z=x[O];Z&&(history.scrollRestoration="manual",scrollTo(Z.x,Z.y));let Q=!1,ie,ye;async function ve(a,{noscroll:f=!1,replaceState:h=!1,keepfocus:o=!1,state:c={}},y){if(typeof a=="string"&&(a=new URL(a,Ie(document))),$)return ce({url:a,scroll:f?ge():null,keepfocus:o,redirect_chain:y,details:{state:c,replaceState:h},accepted:()=>{},blocked:()=>{}});await F(a)}async function ke(a){const f=Re(a);if(!f)throw new Error("Attempted to prefetch a URL that does not belong to this app");return r.promise=Le(f),r.id=f.id,r.promise}async function $e(a,f,h,o){var b,L,U;const c=Re(a),y=ye={};let m=c&&await Le(c);if(!m&&a.origin===location.origin&&a.pathname===location.pathname&&(m=await ne({status:404,error:new Error(`Not found: ${a.pathname}`),url:a,routeId:null})),!m)return await F(a),!1;if(a=(c==null?void 0:c.url)||a,ye!==y)return!1;if(i.length=0,m.type==="redirect")if(f.length>10||f.includes(a.pathname))m=await ne({status:500,error:new Error("Redirect loop"),url:a,routeId:null});else return $?ve(new URL(m.location,a).href,{},[...f,a.pathname]):await F(new URL(m.location,location.href)),!1;else((L=(b=m.props)==null?void 0:b.page)==null?void 0:L.status)>=400&&await G.updated.check()&&await F(a);if(p=!0,h&&h.details){const{details:k}=h,R=k.replaceState?0:1;k.state[H]=O+=R,history[k.replaceState?"replaceState":"pushState"](k.state,"",a)}if(l?(t=m.state,m.props.page&&(m.props.page.url=a),P.$set(m.props)):Ee(m),h){const{scroll:k,keepfocus:R}=h;if(!R){const S=document.body,A=S.getAttribute("tabindex");S.tabIndex=-1,S.focus({preventScroll:!0}),setTimeout(()=>{var w;(w=getSelection())==null||w.removeAllRanges()}),A!==null?S.setAttribute("tabindex",A):S.removeAttribute("tabindex")}if(await Ne(),d){const S=a.hash&&document.getElementById(a.hash.slice(1));k?scrollTo(k.x,k.y):S?S.scrollIntoView():scrollTo(0,0)}}else await Ne();r.promise=null,r.id=null,d=!0,m.props.page&&(ie=m.props.page);const v=m.state.branch[m.state.branch.length-1];$=((U=v==null?void 0:v.node.shared)==null?void 0:U.router)!==!1,o&&o(),p=!1}function Ee(a){t=a.state;const f=document.querySelector("style[data-sveltekit]");if(f&&f.remove(),ie=a.props.page,P=new $t({target:s,props:{...a.props,stores:G},hydrate:!0}),$){const h={from:null,to:new URL(location.href)};u.after_navigate.forEach(o=>o(h))}l=!0}async function te({url:a,params:f,branch:h,status:o,error:c,routeId:y,validation_errors:m}){const 
v=h.filter(Boolean),b={type:"loaded",state:{url:a,params:f,branch:h,error:c,session_id:g},props:{components:v.map(R=>R.node.component),errors:m}};let L={},U=!1;for(let R=0;RS===v[R]))&&(b.props[`data_${R}`]=L,U=!0);if(!t.url||a.href!==t.url.href||t.error!==c||U){b.props.page={error:c,params:f,routeId:y,status:o,url:a,data:L};const R=(S,A)=>{Object.defineProperty(b.props.page,S,{get:()=>{throw new Error(`$page.${S} has been replaced by $page.url.${A}`)}})};R("origin","origin"),R("path","pathname"),R("query","searchParams")}return b}async function oe({loader:a,parent:f,url:h,params:o,routeId:c,server_data_node:y}){var L,U,k,R,S;let m=null;const v={dependencies:new Set,params:new Set,parent:!1,url:!1},b=await a();if((L=b.shared)!=null&&L.load){let A=function(..._){for(const q of _){const{href:N}=new URL(q,h);v.dependencies.add(N)}};const w={};for(const _ in o)Object.defineProperty(w,_,{get(){return v.params.add(_),o[_]},enumerable:!0});const C=new ot(h),j={routeId:c,params:w,data:(U=y==null?void 0:y.data)!=null?U:null,get url(){return v.url=!0,C},async fetch(_,q){let N;typeof _=="string"?N=_:(N=_.url,q={body:_.method==="GET"||_.method==="HEAD"?void 0:await _.blob(),cache:_.cache,credentials:_.credentials,headers:_.headers,integrity:_.integrity,keepalive:_.keepalive,method:_.method,mode:_.mode,redirect:_.redirect,referrer:_.referrer,referrerPolicy:_.referrerPolicy,signal:_.signal,...q});const X=new URL(N,h).href;return A(X),l?ae(X,q):ct(N,q)},setHeaders:()=>{},depends:A,parent(){return v.parent=!0,f()}};Object.defineProperties(j,{props:{get(){throw new Error("@migration task: Replace `props` with `data` stuff https://github.com/sveltejs/kit/discussions/5774#discussioncomment-3292693")},enumerable:!1},session:{get(){throw new Error("session is no longer available. 
See https://github.com/sveltejs/kit/discussions/5883")},enumerable:!1},stuff:{get(){throw new Error("@migration task: Remove stuff https://github.com/sveltejs/kit/discussions/5774#discussioncomment-3292693")},enumerable:!1}}),m=(k=await b.shared.load.call(null,j))!=null?k:null}return{node:b,loader:a,server:y,shared:(R=b.shared)!=null&&R.load?{type:"data",data:m,uses:v}:null,data:(S=m!=null?m:y==null?void 0:y.data)!=null?S:null}}function Se(a,f,h){if(!h)return!1;if(h.parent&&f||a.url&&h.url)return!0;for(const o of a.params)if(h.params.has(o))return!0;for(const o of h.dependencies)if(i.some(c=>c(o)))return!0;return!1}function le(a){var f,h;return(a==null?void 0:a.type)==="data"?{type:"data",data:a.data,uses:{dependencies:new Set((f=a.uses.dependencies)!=null?f:[]),params:new Set((h=a.uses.params)!=null?h:[]),parent:!!a.uses.parent,url:!!a.uses.url}}:null}async function Le({id:a,url:f,params:h,route:o}){if(r.id===a&&r.promise)return r.promise;const{errors:c,layouts:y,leaf:m}=o,v=t.url&&{url:a!==t.url.pathname+t.url.search,params:Object.keys(h).filter(w=>t.params[w]!==h[w])};[...c,...y,m].forEach(w=>w==null?void 0:w().catch(()=>{}));const b=[...y,m];let L=null;const U=b.reduce((w,C,j)=>{var N;const _=t.branch[j],q=C&&((_==null?void 0:_.loader)!==C||Se(v,w.some(Boolean),(N=_.server)==null?void 0:N.uses));return w.push(q),w},[]);if(o.uses_server_data&&U.some(Boolean)){try{const w=await ae(`${f.pathname}${f.pathname.endsWith("/")?"":"/"}__data.json${f.search}`,{headers:{"x-sveltekit-invalidated":U.map(C=>C?"1":"").join(",")}});if(L=await w.json(),!w.ok)throw L}catch{F(f);return}if(L.type==="redirect")return L}const k=L==null?void 0:L.nodes;let R=!1;const S=b.map(async(w,C)=>{var X,je,Pe,Ae;if(!w)return;const j=t.branch[C],_=(X=k==null?void 0:k[C])!=null?X:null;if((!_||_.type==="skip")&&w===(j==null?void 0:j.loader)&&!Se(v,R,(je=j.shared)==null?void 0:je.uses))return j;if(R=!0,(_==null?void 0:_.type)==="error")throw _.httperror?ht(_.httperror.status,_.httperror.message):_.error;return oe({loader:w,url:f,params:h,routeId:o.id,parent:async()=>{var Ce;const Oe={};for(let fe=0;fe{});const A=[];for(let w=0;wPromise.resolve({}),server_data_node:le(m)}),b={node:await be(),loader:be,shared:null,server:null,data:null};return await te({url:h,params:c,branch:[v,b],status:a,error:f,routeId:o})}function Re(a){if(a.origin!==location.origin||!a.pathname.startsWith(e))return;const f=decodeURI(a.pathname.slice(e.length)||"/");for(const h of he){const o=h.exec(f);if(o){const c=new URL(a.origin+st(a.pathname,n)+a.search+a.hash);return{id:c.pathname+c.search,route:h,params:it(o),url:c}}}}async function ce({url:a,scroll:f,keepfocus:h,redirect_chain:o,details:c,accepted:y,blocked:m}){const v=t.url;let b=!1;const L={from:v,to:a,cancel:()=>b=!0};if(u.before_navigate.forEach(U=>U(L)),b){m();return}me(O),y(),l&&G.navigating.set({from:t.url,to:a}),await $e(a,o,{scroll:f,keepfocus:h,details:c},()=>{const U={from:v,to:a};u.after_navigate.forEach(k=>k(U)),G.navigating.set(null)})}function F(a){return location.href=a.href,new Promise(()=>{})}return{after_navigate:a=>{_e(()=>(u.after_navigate.push(a),()=>{const f=u.after_navigate.indexOf(a);u.after_navigate.splice(f,1)}))},before_navigate:a=>{_e(()=>(u.before_navigate.push(a),()=>{const f=u.before_navigate.indexOf(a);u.before_navigate.splice(f,1)}))},disable_scroll_handling:()=>{(p||!l)&&(d=!1)},goto:(a,f={})=>ve(a,f,[]),invalidate:a=>{var f,h;if(a===void 0){for(const o of t.branch)(f=o==null?void 0:o.server)==null||f.uses.dependencies.add(""),(h=o==null?void 
0:o.shared)==null||h.uses.dependencies.add("");i.push(()=>!0)}else if(typeof a=="function")i.push(a);else{const{href:o}=new URL(a,location.href);i.push(c=>c===o)}return E||(E=Promise.resolve().then(async()=>{await $e(new URL(location.href),[]),E=null})),E},prefetch:async a=>{const f=new URL(a,Ie(document));await ke(f)},prefetch_routes:async a=>{const h=(a?he.filter(o=>a.some(c=>o.exec(c))):he).map(o=>Promise.all([...o.layouts,o.leaf].map(c=>c==null?void 0:c())));await Promise.all(h)},_start_router:()=>{history.scrollRestoration="manual",addEventListener("beforeunload",o=>{let c=!1;const y={from:t.url,to:null,cancel:()=>c=!0};u.before_navigate.forEach(m=>m(y)),c?(o.preventDefault(),o.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{if(document.visibilityState==="hidden"){me(O);try{sessionStorage[Ke]=JSON.stringify(x)}catch{}}});const a=o=>{const c=De(o);c&&c.href&&c.hasAttribute("sveltekit:prefetch")&&ke(Te(c))};let f;const h=o=>{clearTimeout(f),f=setTimeout(()=>{var c;(c=o.target)==null||c.dispatchEvent(new CustomEvent("sveltekit:trigger_prefetch",{bubbles:!0}))},20)};addEventListener("touchstart",a),addEventListener("mousemove",h),addEventListener("sveltekit:trigger_prefetch",a),addEventListener("click",o=>{if(!$||o.button||o.which!==1||o.metaKey||o.ctrlKey||o.shiftKey||o.altKey||o.defaultPrevented)return;const c=De(o);if(!c||!c.href)return;const y=c instanceof SVGAElement,m=Te(c);if(!y&&!(m.protocol==="https:"||m.protocol==="http:"))return;const v=(c.getAttribute("rel")||"").split(/\s+/);if(c.hasAttribute("download")||v.includes("external")||c.hasAttribute("sveltekit:reload")||(y?c.target.baseVal:c.target))return;const[b,L]=m.href.split("#");if(L!==void 0&&b===location.href.split("#")[0]){Q=!0,me(O),G.page.set({...ie,url:m}),G.page.notify();return}ce({url:m,scroll:c.hasAttribute("sveltekit:noscroll")?ge():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:m.href===location.href},accepted:()=>o.preventDefault(),blocked:()=>o.preventDefault()})}),addEventListener("popstate",o=>{if(o.state&&$){if(o.state[H]===O)return;ce({url:new URL(location.href),scroll:x[o.state[H]],keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{O=o.state[H]},blocked:()=>{const c=O-o.state[H];history.go(c)}})}}),addEventListener("hashchange",()=>{Q&&(Q=!1,history.replaceState({...history.state,[H]:++O},"",location.href))});for(const o of document.querySelectorAll("link"))o.rel==="icon"&&(o.href=o.href);addEventListener("pageshow",o=>{o.persisted&&G.navigating.set(null)})},_hydrate:async({status:a,error:f,node_ids:h,params:o,routeId:c})=>{const y=new URL(location.href);let m;try{const v=(k,R)=>{const S=document.querySelector(`script[sveltekit\\:data-type="${k}"]`);return S!=null&&S.textContent?JSON.parse(S.textContent):R},b=v("server_data",[]),L=v("validation_errors",void 0),U=h.map(async(k,R)=>oe({loader:se[k],url:y,params:o,routeId:c,parent:async()=>{const S={};for(let A=0;A 0 - The number of analysis frames - - hop_length : int > 0 - The number of samples to advance between frames - - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - - n_fft : int > 0 - The length of each analysis frame. 
- - dtype : np.dtype - The data type of the output - - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, normalize_fun=torch.log, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return normalize_fun(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py deleted file mode 100644 index 6b2c3dca2d168fb5fbaff5acc4b5a06280a496a7..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/modules.py +++ /dev/null @@ -1,1064 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange - -from audioldm.utils import instantiate_from_config -from audioldm.latent_diffusion.attention import LinearAttention - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - -def nonlinearity(x): - # swish - return x * torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm( - num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True - ) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class UpsampleTimeStride4(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=5, stride=1, padding=2 - ) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=(4.0, 2.0), mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # Do time downsampling here - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=2, padding=0 - ) - - def forward(self, x): - if self.with_conv: - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class DownsampleTimeStride4(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # Do time downsampling here - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=5, stride=(4, 2), padding=1 - ) - - def forward(self, x): - if self.with_conv: - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=(4, 2), stride=(4, 2)) - return x - - -class ResnetBlock(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - conv_shortcut=False, - dropout, - temb_channels=512, - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d( - out_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d( - in_channels, 
out_channels, kernel_size=3, stride=1, padding=1 - ) - else: - self.nin_shortcut = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x + h - - -class LinAttnBlock(LinearAttention): - """to match AttnBlock usage""" - - def __init__(self, in_channels): - super().__init__(dim=in_channels, heads=1, dim_head=in_channels) - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h * w).contiguous() - q = q.permute(0, 2, 1).contiguous() # b,hw,c - k = k.reshape(b, c, h * w).contiguous() # b,c,hw - w_ = torch.bmm(q, k).contiguous() # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c) ** (-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h * w).contiguous() - w_ = w_.permute(0, 2, 1).contiguous() # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm( - v, w_ - ).contiguous() # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b, c, h, w).contiguous() - - h_ = self.proj_out(h_) - - return x + h_ - - -def make_attn(in_channels, attn_type="vanilla"): - assert attn_type in ["vanilla", "linear", "none"], f"attn_type {attn_type} unknown" - # print(f"making attention of type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - return AttnBlock(in_channels) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - return LinAttnBlock(in_channels) - - -class Model(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - use_timestep=True, - use_linear_attn=False, - attn_type="vanilla", - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch * 4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList( - [ - torch.nn.Linear(self.ch, self.temb_ch), - torch.nn.Linear(self.temb_ch, self.temb_ch), - ] - ) - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1 - ) - - curr_res = resolution - in_ch_mult = (1,) + tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() 
- attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - skip_in = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - if i_block == self.num_res_blocks: - skip_in = ch * in_ch_mult[i_level] - block.append( - ResnetBlock( - in_channels=block_in + skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x, t=None, context=None): - # assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb - ) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - 
in_channels, - resolution, - z_channels, - double_z=True, - use_linear_attn=False, - attn_type="vanilla", - downsample_time_stride4_levels=[], - **ignore_kwargs, - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.downsample_time_stride4_levels = downsample_time_stride4_levels - - if len(self.downsample_time_stride4_levels) > 0: - assert max(self.downsample_time_stride4_levels) < self.num_resolutions, ( - "The level to perform downsample 4 operation need to be smaller than the total resolution number %s" - % str(self.num_resolutions) - ) - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1 - ) - - curr_res = resolution - in_ch_mult = (1,) + tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - if i_level in self.downsample_time_stride4_levels: - down.downsample = DownsampleTimeStride4(block_in, resamp_with_conv) - else: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, - 2 * z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1, - ) - - def forward(self, x): - # timestep embedding - temb = None - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__( - self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - z_channels, - give_pre_end=False, - tanh_out=False, - use_linear_attn=False, - downsample_time_stride4_levels=[], - attn_type="vanilla", - **ignorekwargs, - ): - super().__init__() - if use_linear_attn: - attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - 
self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - self.downsample_time_stride4_levels = downsample_time_stride4_levels - - if len(self.downsample_time_stride4_levels) > 0: - assert max(self.downsample_time_stride4_levels) < self.num_resolutions, ( - "The level to perform downsample 4 operation need to be smaller than the total resolution number %s" - % str(self.num_resolutions) - ) - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,) + tuple(ch_mult) - block_in = ch * ch_mult[self.num_resolutions - 1] - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.z_shape = (1, z_channels, curr_res, curr_res) - # print("Working with z of shape {} = {} dimensions.".format( - # self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d( - z_channels, block_in, kernel_size=3, stride=1, padding=1 - ) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout, - ) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - if i_level - 1 in self.downsample_time_stride4_levels: - up.upsample = UpsampleTimeStride4(block_in, resamp_with_conv) - else: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, z): - # assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList( - [ - nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock( - in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, - dropout=0.0, - ), - ResnetBlock( - in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, - dropout=0.0, - ), - ResnetBlock( - in_channels=4 * in_channels, - out_channels=2 * in_channels, - 
temb_channels=0, - dropout=0.0, - ), - nn.Conv2d(2 * in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True), - ] - ) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1, 2, 3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - ch, - num_res_blocks, - resolution, - ch_mult=(2, 2), - dropout=0.0, - ): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout, - ) - ) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_channels, kernel_size=3, stride=1, padding=1 - ) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d( - in_channels, mid_channels, kernel_size=3, stride=1, padding=1 - ) - self.res_block1 = nn.ModuleList( - [ - ResnetBlock( - in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0, - ) - for _ in range(depth) - ] - ) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList( - [ - ResnetBlock( - in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0, - ) - for _ in range(depth) - ] - ) - - self.conv_out = nn.Conv2d( - mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate( - x, - size=( - int(round(x.shape[2] * self.factor)), - int(round(x.shape[3] * self.factor)), - ), - ) - x = self.attn(x).contiguous() - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__( - self, - in_channels, - ch, - resolution, - out_ch, - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - ch_mult=(1, 2, 4, 8), - rescale_factor=1.0, - rescale_module_depth=1, - ): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder( - in_channels=in_channels, - num_res_blocks=num_res_blocks, - ch=ch, - ch_mult=ch_mult, - 
z_channels=intermediate_chn, - double_z=False, - resolution=resolution, - attn_resolutions=attn_resolutions, - dropout=dropout, - resamp_with_conv=resamp_with_conv, - out_ch=None, - ) - self.rescaler = LatentRescaler( - factor=rescale_factor, - in_channels=intermediate_chn, - mid_channels=intermediate_chn, - out_channels=out_ch, - depth=rescale_module_depth, - ) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__( - self, - z_channels, - out_ch, - resolution, - num_res_blocks, - attn_resolutions, - ch, - ch_mult=(1, 2, 4, 8), - dropout=0.0, - resamp_with_conv=True, - rescale_factor=1.0, - rescale_module_depth=1, - ): - super().__init__() - tmp_chn = z_channels * ch_mult[-1] - self.decoder = Decoder( - out_ch=out_ch, - z_channels=tmp_chn, - attn_resolutions=attn_resolutions, - dropout=dropout, - resamp_with_conv=resamp_with_conv, - in_channels=None, - num_res_blocks=num_res_blocks, - ch_mult=ch_mult, - resolution=resolution, - ch=ch, - ) - self.rescaler = LatentRescaler( - factor=rescale_factor, - in_channels=z_channels, - mid_channels=tmp_chn, - out_channels=tmp_chn, - depth=rescale_module_depth, - ) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Upsampler(nn.Module): - def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2): - super().__init__() - assert out_size >= in_size - num_blocks = int(np.log2(out_size // in_size)) + 1 - factor_up = 1.0 + (out_size % in_size) - print( - f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}" - ) - self.rescaler = LatentRescaler( - factor=factor_up, - in_channels=in_channels, - mid_channels=2 * in_channels, - out_channels=in_channels, - ) - self.decoder = Decoder( - out_ch=out_channels, - resolution=out_size, - z_channels=in_channels, - num_res_blocks=2, - attn_resolutions=[], - in_channels=None, - ch=in_channels, - ch_mult=[ch_mult for _ in range(num_blocks)], - ) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Resize(nn.Module): - def __init__(self, in_channels=None, learned=False, mode="bilinear"): - super().__init__() - self.with_conv = learned - self.mode = mode - if self.with_conv: - print( - f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode" - ) - raise NotImplementedError() - assert in_channels is not None - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=4, stride=2, padding=1 - ) - - def forward(self, x, scale_factor=1.0): - if scale_factor == 1.0: - return x - else: - x = torch.nn.functional.interpolate( - x, mode=self.mode, align_corners=False, scale_factor=scale_factor - ) - return x - - -class FirstStagePostProcessor(nn.Module): - def __init__( - self, - ch_mult: list, - in_channels, - pretrained_model: nn.Module = None, - reshape=False, - n_channels=None, - dropout=0.0, - pretrained_config=None, - ): - super().__init__() - if pretrained_config is None: - assert ( - pretrained_model is not None - ), 'Either "pretrained_model" or "pretrained_config" must not be None' - self.pretrained_model = pretrained_model - else: - assert ( - pretrained_config is not None - ), 'Either "pretrained_model" or "pretrained_config" must not be None' - self.instantiate_pretrained(pretrained_config) - - self.do_reshape = reshape - - if n_channels is None: - n_channels = 
self.pretrained_model.encoder.ch - - self.proj_norm = Normalize(in_channels, num_groups=in_channels // 2) - self.proj = nn.Conv2d( - in_channels, n_channels, kernel_size=3, stride=1, padding=1 - ) - - blocks = [] - downs = [] - ch_in = n_channels - for m in ch_mult: - blocks.append( - ResnetBlock( - in_channels=ch_in, out_channels=m * n_channels, dropout=dropout - ) - ) - ch_in = m * n_channels - downs.append(Downsample(ch_in, with_conv=False)) - - self.model = nn.ModuleList(blocks) - self.downsampler = nn.ModuleList(downs) - - def instantiate_pretrained(self, config): - model = instantiate_from_config(config) - self.pretrained_model = model.eval() - # self.pretrained_model.train = False - for param in self.pretrained_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def encode_with_pretrained(self, x): - c = self.pretrained_model.encode(x) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - return c - - def forward(self, x): - z_fs = self.encode_with_pretrained(x) - z = self.proj_norm(z_fs) - z = self.proj(z) - z = nonlinearity(z) - - for submodel, downmodel in zip(self.model, self.downsampler): - z = submodel(z, temb=None) - z = downmodel(z) - - if self.do_reshape: - z = rearrange(z, "b c h w -> b (h w) c") - return z diff --git a/spaces/camenduru-com/wav2lip/README.md b/spaces/camenduru-com/wav2lip/README.md deleted file mode 100644 index 48a882e2b5af0d3cf86797c29d25bf6085d98a7a..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/wav2lip/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Wav2lip -emoji: 🗣 -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/TiffImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/TiffImagePlugin.py deleted file mode 100644 index d5148828506b36c72bac626b2032ebf129a62678..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/TiffImagePlugin.py +++ /dev/null @@ -1,2163 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TIFF file handling -# -# TIFF is a flexible, if somewhat aged, image file format originally -# defined by Aldus. Although TIFF supports a wide variety of pixel -# layouts and compression methods, the name doesn't really stand for -# "thousands of incompatible file formats," it just feels that way. -# -# To read TIFF data from a stream, the stream must be seekable. For -# progressive decoding, make sure to use TIFF files where the tag -# directory is placed first in the file. -# -# History: -# 1995-09-01 fl Created -# 1996-05-04 fl Handle JPEGTABLES tag -# 1996-05-18 fl Fixed COLORMAP support -# 1997-01-05 fl Fixed PREDICTOR support -# 1997-08-27 fl Added support for rational tags (from Perry Stoll) -# 1998-01-10 fl Fixed seek/tell (from Jan Blom) -# 1998-07-15 fl Use private names for internal variables -# 1999-06-13 fl Rewritten for PIL 1.0 (1.0) -# 2000-10-11 fl Additional fixes for Python 2.0 (1.1) -# 2001-04-17 fl Fixed rewind support (seek to frame 0) (1.2) -# 2001-05-12 fl Added write support for more tags (from Greg Couch) (1.3) -# 2001-12-18 fl Added workaround for broken Matrox library -# 2002-01-18 fl Don't mess up if photometric tag is missing (D. 
Alan Stewart) -# 2003-05-19 fl Check FILLORDER tag -# 2003-09-26 fl Added RGBa support -# 2004-02-24 fl Added DPI support; fixed rational write support -# 2005-02-07 fl Added workaround for broken Corel Draw 10 files -# 2006-01-09 fl Added support for float/double tags (from Russell Nelson) -# -# Copyright (c) 1997-2006 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -import io -import itertools -import logging -import math -import os -import struct -import warnings -from collections.abc import MutableMapping -from fractions import Fraction -from numbers import Number, Rational - -from . import ExifTags, Image, ImageFile, ImageOps, ImagePalette, TiffTags -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from .TiffTags import TYPES - -logger = logging.getLogger(__name__) - -# Set these to true to force use of libtiff for reading or writing. -READ_LIBTIFF = False -WRITE_LIBTIFF = False -IFD_LEGACY_API = True -STRIP_SIZE = 65536 - -II = b"II" # little-endian (Intel style) -MM = b"MM" # big-endian (Motorola style) - -# -# -------------------------------------------------------------------- -# Read TIFF files - -# a few tag names, just to make the code below a bit more readable -IMAGEWIDTH = 256 -IMAGELENGTH = 257 -BITSPERSAMPLE = 258 -COMPRESSION = 259 -PHOTOMETRIC_INTERPRETATION = 262 -FILLORDER = 266 -IMAGEDESCRIPTION = 270 -STRIPOFFSETS = 273 -SAMPLESPERPIXEL = 277 -ROWSPERSTRIP = 278 -STRIPBYTECOUNTS = 279 -X_RESOLUTION = 282 -Y_RESOLUTION = 283 -PLANAR_CONFIGURATION = 284 -RESOLUTION_UNIT = 296 -TRANSFERFUNCTION = 301 -SOFTWARE = 305 -DATE_TIME = 306 -ARTIST = 315 -PREDICTOR = 317 -COLORMAP = 320 -TILEWIDTH = 322 -TILELENGTH = 323 -TILEOFFSETS = 324 -TILEBYTECOUNTS = 325 -SUBIFD = 330 -EXTRASAMPLES = 338 -SAMPLEFORMAT = 339 -JPEGTABLES = 347 -YCBCRSUBSAMPLING = 530 -REFERENCEBLACKWHITE = 532 -COPYRIGHT = 33432 -IPTC_NAA_CHUNK = 33723 # newsphoto properties -PHOTOSHOP_CHUNK = 34377 # photoshop properties -ICCPROFILE = 34675 -EXIFIFD = 34665 -XMP = 700 -JPEGQUALITY = 65537 # pseudo-tag by libtiff - -# https://github.com/imagej/ImageJA/blob/master/src/main/java/ij/io/TiffDecoder.java -IMAGEJ_META_DATA_BYTE_COUNTS = 50838 -IMAGEJ_META_DATA = 50839 - -COMPRESSION_INFO = { - # Compression => pil compression name - 1: "raw", - 2: "tiff_ccitt", - 3: "group3", - 4: "group4", - 5: "tiff_lzw", - 6: "tiff_jpeg", # obsolete - 7: "jpeg", - 8: "tiff_adobe_deflate", - 32771: "tiff_raw_16", # 16-bit padding - 32773: "packbits", - 32809: "tiff_thunderscan", - 32946: "tiff_deflate", - 34676: "tiff_sgilog", - 34677: "tiff_sgilog24", - 34925: "lzma", - 50000: "zstd", - 50001: "webp", -} - -COMPRESSION_INFO_REV = {v: k for k, v in COMPRESSION_INFO.items()} - -OPEN_INFO = { - # (ByteOrder, PhotoInterpretation, SampleFormat, FillOrder, BitsPerSample, - # ExtraSamples) => mode, rawmode - (II, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (MM, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (II, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (MM, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (II, 1, (1,), 1, (1,), ()): ("1", "1"), - (MM, 1, (1,), 1, (1,), ()): ("1", "1"), - (II, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (MM, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (II, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (MM, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (II, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (MM, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (II, 1, (1,), 1, (2,), ()): ("L", "L;2"), - (MM, 1, 
(1,), 1, (2,), ()): ("L", "L;2"), - (II, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (MM, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (II, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (MM, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (II, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (MM, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (II, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (MM, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (II, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (MM, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (II, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (MM, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (II, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (MM, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (II, 1, (1,), 1, (8,), ()): ("L", "L"), - (MM, 1, (1,), 1, (8,), ()): ("L", "L"), - (II, 1, (2,), 1, (8,), ()): ("L", "L"), - (MM, 1, (2,), 1, (8,), ()): ("L", "L"), - (II, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (MM, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (II, 1, (1,), 1, (12,), ()): ("I;16", "I;12"), - (II, 0, (1,), 1, (16,), ()): ("I;16", "I;16"), - (II, 1, (1,), 1, (16,), ()): ("I;16", "I;16"), - (MM, 1, (1,), 1, (16,), ()): ("I;16B", "I;16B"), - (II, 1, (1,), 2, (16,), ()): ("I;16", "I;16R"), - (II, 1, (2,), 1, (16,), ()): ("I", "I;16S"), - (MM, 1, (2,), 1, (16,), ()): ("I", "I;16BS"), - (II, 0, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 0, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (32,), ()): ("I", "I;32N"), - (II, 1, (2,), 1, (32,), ()): ("I", "I;32S"), - (MM, 1, (2,), 1, (32,), ()): ("I", "I;32BS"), - (II, 1, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 1, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (MM, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (II, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (MM, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (II, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (MM, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (II, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (MM, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (II, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (MM, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (II, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16L"), - (MM, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16L"), - (MM, 2, 
(1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16B"), - (II, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (MM, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (II, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (MM, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (II, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (MM, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (II, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (MM, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (II, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (MM, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (II, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (MM, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (II, 3, (1,), 1, (8,), ()): ("P", "P"), - (MM, 3, (1,), 1, (8,), ()): ("P", "P"), - (II, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (MM, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (II, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (MM, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (II, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"), - (MM, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (II, 5, (1,), 1, (16, 16, 16, 16), ()): ("CMYK", "CMYK;16L"), - # JPEG compressed images handled by LibTiff and auto-converted to RGBX - # Minimal Baseline TIFF requires YCbCr images to have 3 SamplesPerPixel - (II, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (MM, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (II, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), - (MM, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), -} - -MAX_SAMPLESPERPIXEL = max(len(key_tp[4]) for key_tp in OPEN_INFO) - -PREFIXES = [ - b"MM\x00\x2A", # Valid TIFF header with big-endian byte order - b"II\x2A\x00", # Valid TIFF header with little-endian byte order - b"MM\x2A\x00", # Invalid TIFF header, assume big-endian - b"II\x00\x2A", # Invalid TIFF header, assume little-endian - b"MM\x00\x2B", # BigTIFF with big-endian byte order - b"II\x2B\x00", # BigTIFF with little-endian byte order -] - - -def _accept(prefix): - return prefix[:4] in PREFIXES - - -def _limit_rational(val, max_val): - inv = abs(val) > 1 - n_d = IFDRational(1 / val if inv else val).limit_rational(max_val) - return n_d[::-1] if inv else n_d - - -def _limit_signed_rational(val, max_val, min_val): - frac = Fraction(val) - n_d = frac.numerator, frac.denominator - - if min(n_d) < min_val: - n_d = _limit_rational(val, abs(min_val)) - - if max(n_d) > max_val: - val = Fraction(*n_d) - n_d = _limit_rational(val, max_val) - - return n_d - - -## -# Wrapper for TIFF IFDs. - -_load_dispatch = {} -_write_dispatch = {} - - -class IFDRational(Rational): - """Implements a rational class where 0/0 is a legal value to match - the in the wild use of exif rationals. - - e.g., DigitalZoomRatio - 0.00/0.00 indicates that no digital zoom was used - """ - - """ If the denominator is 0, store this as a float('nan'), otherwise store - as a fractions.Fraction(). 
Delegate as appropriate - - """ - - __slots__ = ("_numerator", "_denominator", "_val") - - def __init__(self, value, denominator=1): - """ - :param value: either an integer numerator, a - float/rational/other number, or an IFDRational - :param denominator: Optional integer denominator - """ - if isinstance(value, IFDRational): - self._numerator = value.numerator - self._denominator = value.denominator - self._val = value._val - return - - if isinstance(value, Fraction): - self._numerator = value.numerator - self._denominator = value.denominator - else: - self._numerator = value - self._denominator = denominator - - if denominator == 0: - self._val = float("nan") - elif denominator == 1: - self._val = Fraction(value) - else: - self._val = Fraction(value, denominator) - - @property - def numerator(self): - return self._numerator - - @property - def denominator(self): - return self._denominator - - def limit_rational(self, max_denominator): - """ - - :param max_denominator: Integer, the maximum denominator value - :returns: Tuple of (numerator, denominator) - """ - - if self.denominator == 0: - return self.numerator, self.denominator - - f = self._val.limit_denominator(max_denominator) - return f.numerator, f.denominator - - def __repr__(self): - return str(float(self._val)) - - def __hash__(self): - return self._val.__hash__() - - def __eq__(self, other): - val = self._val - if isinstance(other, IFDRational): - other = other._val - if isinstance(other, float): - val = float(val) - return val == other - - def __getstate__(self): - return [self._val, self._numerator, self._denominator] - - def __setstate__(self, state): - IFDRational.__init__(self, 0) - _val, _numerator, _denominator = state - self._val = _val - self._numerator = _numerator - self._denominator = _denominator - - def _delegate(op): - def delegate(self, *args): - return getattr(self._val, op)(*args) - - return delegate - - """ a = ['add','radd', 'sub', 'rsub', 'mul', 'rmul', - 'truediv', 'rtruediv', 'floordiv', 'rfloordiv', - 'mod','rmod', 'pow','rpow', 'pos', 'neg', - 'abs', 'trunc', 'lt', 'gt', 'le', 'ge', 'bool', - 'ceil', 'floor', 'round'] - print("\n".join("__%s__ = _delegate('__%s__')" % (s,s) for s in a)) - """ - - __add__ = _delegate("__add__") - __radd__ = _delegate("__radd__") - __sub__ = _delegate("__sub__") - __rsub__ = _delegate("__rsub__") - __mul__ = _delegate("__mul__") - __rmul__ = _delegate("__rmul__") - __truediv__ = _delegate("__truediv__") - __rtruediv__ = _delegate("__rtruediv__") - __floordiv__ = _delegate("__floordiv__") - __rfloordiv__ = _delegate("__rfloordiv__") - __mod__ = _delegate("__mod__") - __rmod__ = _delegate("__rmod__") - __pow__ = _delegate("__pow__") - __rpow__ = _delegate("__rpow__") - __pos__ = _delegate("__pos__") - __neg__ = _delegate("__neg__") - __abs__ = _delegate("__abs__") - __trunc__ = _delegate("__trunc__") - __lt__ = _delegate("__lt__") - __gt__ = _delegate("__gt__") - __le__ = _delegate("__le__") - __ge__ = _delegate("__ge__") - __bool__ = _delegate("__bool__") - __ceil__ = _delegate("__ceil__") - __floor__ = _delegate("__floor__") - __round__ = _delegate("__round__") - # Python >= 3.11 - if hasattr(Fraction, "__int__"): - __int__ = _delegate("__int__") - - -class ImageFileDirectory_v2(MutableMapping): - """This class represents a TIFF tag directory. To speed things up, we - don't decode tags unless they're asked for. 
- - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v2() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - 'Some Data' - - Individual values are returned as the strings or numbers, sequences are - returned as tuples of the values. - - The tiff metadata type of each item is stored in a dictionary of - tag types in - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v2.tagtype`. The types - are read from a tiff file, guessed from the type added, or added - manually. - - Data Structures: - - * ``self.tagtype = {}`` - - * Key: numerical TIFF tag number - * Value: integer corresponding to the data type from - :py:data:`.TiffTags.TYPES` - - .. versionadded:: 3.0.0 - - 'Internal' data structures: - - * ``self._tags_v2 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data, as tuple for multiple values - - * ``self._tagdata = {}`` - - * Key: numerical TIFF tag number - * Value: undecoded byte string from file - - * ``self._tags_v1 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data in the v1 format - - Tags will be found in the private attributes ``self._tagdata``, and in - ``self._tags_v2`` once decoded. - - ``self.legacy_api`` is a value for internal use, and shouldn't be changed - from outside code. In cooperation with - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`, if ``legacy_api`` - is true, then decoded tags will be populated into both ``_tags_v1`` and - ``_tags_v2``. ``_tags_v2`` will be used if this IFD is used in the TIFF - save routine. Tags should be read from ``_tags_v1`` if - ``legacy_api == true``. - - """ - - def __init__(self, ifh=b"II\052\0\0\0\0\0", prefix=None, group=None): - """Initialize an ImageFileDirectory. - - To construct an ImageFileDirectory from a real file, pass the 8-byte - magic header to the constructor. To only set the endianness, pass it - as the 'prefix' keyword argument. - - :param ifh: One of the accepted magic headers (cf. PREFIXES); also sets - endianness. - :param prefix: Override the endianness of the file. - """ - if not _accept(ifh): - msg = f"not a TIFF file (header {repr(ifh)} not valid)" - raise SyntaxError(msg) - self._prefix = prefix if prefix is not None else ifh[:2] - if self._prefix == MM: - self._endian = ">" - elif self._prefix == II: - self._endian = "<" - else: - msg = "not a TIFF IFD" - raise SyntaxError(msg) - self._bigtiff = ifh[2] == 43 - self.group = group - self.tagtype = {} - """ Dictionary of tag types """ - self.reset() - (self.next,) = ( - self._unpack("Q", ifh[8:]) if self._bigtiff else self._unpack("L", ifh[4:]) - ) - self._legacy_api = False - - prefix = property(lambda self: self._prefix) - offset = property(lambda self: self._offset) - legacy_api = property(lambda self: self._legacy_api) - - @legacy_api.setter - def legacy_api(self, value): - msg = "Not allowing setting of legacy api" - raise Exception(msg) - - def reset(self): - self._tags_v1 = {} # will remain empty if legacy_api is false - self._tags_v2 = {} # main tag storage - self._tagdata = {} - self.tagtype = {} # added 2008-06-05 by Florian Hoech - self._next = None - self._offset = None - - def __str__(self): - return str(dict(self)) - - def named(self): - """ - :returns: dict of name|key: value - - Returns the complete tag dictionary, with named tags where possible. 
- """ - return { - TiffTags.lookup(code, self.group).name: value - for code, value in self.items() - } - - def __len__(self): - return len(set(self._tagdata) | set(self._tags_v2)) - - def __getitem__(self, tag): - if tag not in self._tags_v2: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - self[tag] = handler(self, data, self.legacy_api) # check type - val = self._tags_v2[tag] - if self.legacy_api and not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - def __contains__(self, tag): - return tag in self._tags_v2 or tag in self._tagdata - - def __setitem__(self, tag, value): - self._setitem(tag, value, self.legacy_api) - - def _setitem(self, tag, value, legacy_api): - basetypes = (Number, bytes, str) - - info = TiffTags.lookup(tag, self.group) - values = [value] if isinstance(value, basetypes) else value - - if tag not in self.tagtype: - if info.type: - self.tagtype[tag] = info.type - else: - self.tagtype[tag] = TiffTags.UNDEFINED - if all(isinstance(v, IFDRational) for v in values): - self.tagtype[tag] = ( - TiffTags.RATIONAL - if all(v >= 0 for v in values) - else TiffTags.SIGNED_RATIONAL - ) - elif all(isinstance(v, int) for v in values): - if all(0 <= v < 2**16 for v in values): - self.tagtype[tag] = TiffTags.SHORT - elif all(-(2**15) < v < 2**15 for v in values): - self.tagtype[tag] = TiffTags.SIGNED_SHORT - else: - self.tagtype[tag] = ( - TiffTags.LONG - if all(v >= 0 for v in values) - else TiffTags.SIGNED_LONG - ) - elif all(isinstance(v, float) for v in values): - self.tagtype[tag] = TiffTags.DOUBLE - elif all(isinstance(v, str) for v in values): - self.tagtype[tag] = TiffTags.ASCII - elif all(isinstance(v, bytes) for v in values): - self.tagtype[tag] = TiffTags.BYTE - - if self.tagtype[tag] == TiffTags.UNDEFINED: - values = [ - v.encode("ascii", "replace") if isinstance(v, str) else v - for v in values - ] - elif self.tagtype[tag] == TiffTags.RATIONAL: - values = [float(v) if isinstance(v, int) else v for v in values] - - is_ifd = self.tagtype[tag] == TiffTags.LONG and isinstance(values, dict) - if not is_ifd: - values = tuple(info.cvt_enum(value) for value in values) - - dest = self._tags_v1 if legacy_api else self._tags_v2 - - # Three branches: - # Spec'd length == 1, Actual length 1, store as element - # Spec'd length == 1, Actual > 1, Warn and truncate. Formerly barfed. - # No Spec, Actual length 1, Formerly (<4.2) returned a 1 element tuple. - # Don't mess with the legacy api, since it's frozen. - if not is_ifd and ( - (info.length == 1) - or self.tagtype[tag] == TiffTags.BYTE - or (info.length is None and len(values) == 1 and not legacy_api) - ): - # Don't mess with the legacy api, since it's frozen. 
- if legacy_api and self.tagtype[tag] in [ - TiffTags.RATIONAL, - TiffTags.SIGNED_RATIONAL, - ]: # rationals - values = (values,) - try: - (dest[tag],) = values - except ValueError: - # We've got a builtin tag with 1 expected entry - warnings.warn( - f"Metadata Warning, tag {tag} had too many entries: " - f"{len(values)}, expected 1" - ) - dest[tag] = values[0] - - else: - # Spec'd length > 1 or undefined - # Unspec'd, and length > 1 - dest[tag] = values - - def __delitem__(self, tag): - self._tags_v2.pop(tag, None) - self._tags_v1.pop(tag, None) - self._tagdata.pop(tag, None) - - def __iter__(self): - return iter(set(self._tagdata) | set(self._tags_v2)) - - def _unpack(self, fmt, data): - return struct.unpack(self._endian + fmt, data) - - def _pack(self, fmt, *values): - return struct.pack(self._endian + fmt, *values) - - def _register_loader(idx, size): - def decorator(func): - from .TiffTags import TYPES - - if func.__name__.startswith("load_"): - TYPES[idx] = func.__name__[5:].replace("_", " ") - _load_dispatch[idx] = size, func # noqa: F821 - return func - - return decorator - - def _register_writer(idx): - def decorator(func): - _write_dispatch[idx] = func # noqa: F821 - return func - - return decorator - - def _register_basic(idx_fmt_name): - from .TiffTags import TYPES - - idx, fmt, name = idx_fmt_name - TYPES[idx] = name - size = struct.calcsize("=" + fmt) - _load_dispatch[idx] = ( # noqa: F821 - size, - lambda self, data, legacy_api=True: ( - self._unpack(f"{len(data) // size}{fmt}", data) - ), - ) - _write_dispatch[idx] = lambda self, *values: ( # noqa: F821 - b"".join(self._pack(fmt, value) for value in values) - ) - - list( - map( - _register_basic, - [ - (TiffTags.SHORT, "H", "short"), - (TiffTags.LONG, "L", "long"), - (TiffTags.SIGNED_BYTE, "b", "signed byte"), - (TiffTags.SIGNED_SHORT, "h", "signed short"), - (TiffTags.SIGNED_LONG, "l", "signed long"), - (TiffTags.FLOAT, "f", "float"), - (TiffTags.DOUBLE, "d", "double"), - (TiffTags.IFD, "L", "long"), - (TiffTags.LONG8, "Q", "long8"), - ], - ) - ) - - @_register_loader(1, 1) # Basic type, except for the legacy API. - def load_byte(self, data, legacy_api=True): - return data - - @_register_writer(1) # Basic type, except for the legacy API. 
- def write_byte(self, data): - if isinstance(data, IFDRational): - data = int(data) - if isinstance(data, int): - data = bytes((data,)) - return data - - @_register_loader(2, 1) - def load_string(self, data, legacy_api=True): - if data.endswith(b"\0"): - data = data[:-1] - return data.decode("latin-1", "replace") - - @_register_writer(2) - def write_string(self, value): - # remerge of https://github.com/python-pillow/Pillow/pull/1416 - if isinstance(value, int): - value = str(value) - if not isinstance(value, bytes): - value = value.encode("ascii", "replace") - return value + b"\0" - - @_register_loader(5, 8) - def load_rational(self, data, legacy_api=True): - vals = self._unpack(f"{len(data) // 4}L", data) - - def combine(a, b): - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2])) - - @_register_writer(5) - def write_rational(self, *values): - return b"".join( - self._pack("2L", *_limit_rational(frac, 2**32 - 1)) for frac in values - ) - - @_register_loader(7, 1) - def load_undefined(self, data, legacy_api=True): - return data - - @_register_writer(7) - def write_undefined(self, value): - if isinstance(value, int): - value = str(value).encode("ascii", "replace") - return value - - @_register_loader(10, 8) - def load_signed_rational(self, data, legacy_api=True): - vals = self._unpack(f"{len(data) // 4}l", data) - - def combine(a, b): - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2])) - - @_register_writer(10) - def write_signed_rational(self, *values): - return b"".join( - self._pack("2l", *_limit_signed_rational(frac, 2**31 - 1, -(2**31))) - for frac in values - ) - - def _ensure_read(self, fp, size): - ret = fp.read(size) - if len(ret) != size: - msg = ( - "Corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(ret)}. " - ) - raise OSError(msg) - return ret - - def load(self, fp): - self.reset() - self._offset = fp.tell() - - try: - tag_count = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("H", self._ensure_read(fp, 2)) - )[0] - for i in range(tag_count): - tag, typ, count, data = ( - self._unpack("HHQ8s", self._ensure_read(fp, 20)) - if self._bigtiff - else self._unpack("HHL4s", self._ensure_read(fp, 12)) - ) - - tagname = TiffTags.lookup(tag, self.group).name - typname = TYPES.get(typ, "unknown") - msg = f"tag: {tagname} ({tag}) - type: {typname} ({typ})" - - try: - unit_size, handler = self._load_dispatch[typ] - except KeyError: - logger.debug(msg + f" - unsupported type {typ}") - continue # ignore unsupported type - size = count * unit_size - if size > (8 if self._bigtiff else 4): - here = fp.tell() - (offset,) = self._unpack("Q" if self._bigtiff else "L", data) - msg += f" Tag Location: {here} - Data Location: {offset}" - fp.seek(offset) - data = ImageFile._safe_read(fp, size) - fp.seek(here) - else: - data = data[:size] - - if len(data) != size: - warnings.warn( - "Possibly corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(data)}." 
- f" Skipping tag {tag}" - ) - logger.debug(msg) - continue - - if not data: - logger.debug(msg) - continue - - self._tagdata[tag] = data - self.tagtype[tag] = typ - - msg += " - value: " + ( - "" % size if size > 32 else repr(data) - ) - logger.debug(msg) - - (self.next,) = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("L", self._ensure_read(fp, 4)) - ) - except OSError as msg: - warnings.warn(str(msg)) - return - - def tobytes(self, offset=0): - # FIXME What about tagdata? - result = self._pack("H", len(self._tags_v2)) - - entries = [] - offset = offset + len(result) + len(self._tags_v2) * 12 + 4 - stripoffsets = None - - # pass 1: convert tags to binary format - # always write tags in ascending order - for tag, value in sorted(self._tags_v2.items()): - if tag == STRIPOFFSETS: - stripoffsets = len(entries) - typ = self.tagtype.get(tag) - logger.debug(f"Tag {tag}, Type: {typ}, Value: {repr(value)}") - is_ifd = typ == TiffTags.LONG and isinstance(value, dict) - if is_ifd: - if self._endian == "<": - ifh = b"II\x2A\x00\x08\x00\x00\x00" - else: - ifh = b"MM\x00\x2A\x00\x00\x00\x08" - ifd = ImageFileDirectory_v2(ifh, group=tag) - values = self._tags_v2[tag] - for ifd_tag, ifd_value in values.items(): - ifd[ifd_tag] = ifd_value - data = ifd.tobytes(offset) - else: - values = value if isinstance(value, tuple) else (value,) - data = self._write_dispatch[typ](self, *values) - - tagname = TiffTags.lookup(tag, self.group).name - typname = "ifd" if is_ifd else TYPES.get(typ, "unknown") - msg = f"save: {tagname} ({tag}) - type: {typname} ({typ})" - msg += " - value: " + ( - "" % len(data) if len(data) >= 16 else str(values) - ) - logger.debug(msg) - - # count is sum of lengths for string and arbitrary data - if is_ifd: - count = 1 - elif typ in [TiffTags.BYTE, TiffTags.ASCII, TiffTags.UNDEFINED]: - count = len(data) - else: - count = len(values) - # figure out if data fits into the entry - if len(data) <= 4: - entries.append((tag, typ, count, data.ljust(4, b"\0"), b"")) - else: - entries.append((tag, typ, count, self._pack("L", offset), data)) - offset += (len(data) + 1) // 2 * 2 # pad to word - - # update strip offset data to point beyond auxiliary data - if stripoffsets is not None: - tag, typ, count, value, data = entries[stripoffsets] - if data: - msg = "multistrip support not yet implemented" - raise NotImplementedError(msg) - value = self._pack("L", self._unpack("L", value)[0] + offset) - entries[stripoffsets] = tag, typ, count, value, data - - # pass 2: write entries to file - for tag, typ, count, value, data in entries: - logger.debug(f"{tag} {typ} {count} {repr(value)} {repr(data)}") - result += self._pack("HHL4s", tag, typ, count, value) - - # -- overwrite here for multi-page -- - result += b"\0\0\0\0" # end of entries - - # pass 3: write auxiliary data to file - for tag, typ, count, value, data in entries: - result += data - if len(data) & 1: - result += b"\0" - - return result - - def save(self, fp): - if fp.tell() == 0: # skip TIFF header on subsequent pages - # tiff header -- PIL always starts the first IFD at offset 8 - fp.write(self._prefix + self._pack("HL", 42, 8)) - - offset = fp.tell() - result = self.tobytes(offset) - fp.write(result) - return offset + len(result) - - -ImageFileDirectory_v2._load_dispatch = _load_dispatch -ImageFileDirectory_v2._write_dispatch = _write_dispatch -for idx, name in TYPES.items(): - name = name.replace(" ", "_") - setattr(ImageFileDirectory_v2, "load_" + name, _load_dispatch[idx][1]) - 
setattr(ImageFileDirectory_v2, "write_" + name, _write_dispatch[idx]) -del _load_dispatch, _write_dispatch, idx, name - - -# Legacy ImageFileDirectory support. -class ImageFileDirectory_v1(ImageFileDirectory_v2): - """This class represents the **legacy** interface to a TIFF tag directory. - - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v1() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - ('Some Data',) - - Also contains a dictionary of tag types as read from the tiff image file, - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v1.tagtype`. - - Values are returned as a tuple. - - .. deprecated:: 3.0.0 - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._legacy_api = True - - tags = property(lambda self: self._tags_v1) - tagdata = property(lambda self: self._tagdata) - - # defined in ImageFileDirectory_v2 - tagtype: dict - """Dictionary of tag types""" - - @classmethod - def from_v2(cls, original): - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance. - - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - - """ - - ifd = cls(prefix=original.prefix) - ifd._tagdata = original._tagdata - ifd.tagtype = original.tagtype - ifd.next = original.next # an indicator for multipage tiffs - return ifd - - def to_v2(self): - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance. - - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - - """ - - ifd = ImageFileDirectory_v2(prefix=self.prefix) - ifd._tagdata = dict(self._tagdata) - ifd.tagtype = dict(self.tagtype) - ifd._tags_v2 = dict(self._tags_v2) - return ifd - - def __contains__(self, tag): - return tag in self._tags_v1 or tag in self._tagdata - - def __len__(self): - return len(set(self._tagdata) | set(self._tags_v1)) - - def __iter__(self): - return iter(set(self._tagdata) | set(self._tags_v1)) - - def __setitem__(self, tag, value): - for legacy_api in (False, True): - self._setitem(tag, value, legacy_api) - - def __getitem__(self, tag): - if tag not in self._tags_v1: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - for legacy in (False, True): - self._setitem(tag, handler(self, data, legacy), legacy) - val = self._tags_v1[tag] - if not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - -# undone -- switch this pointer when IFD_LEGACY_API == False -ImageFileDirectory = ImageFileDirectory_v1 - - -## -# Image plugin for TIFF files. 
- - -class TiffImageFile(ImageFile.ImageFile): - format = "TIFF" - format_description = "Adobe TIFF" - _close_exclusive_fp_after_loading = False - - def __init__(self, fp=None, filename=None): - self.tag_v2 = None - """ Image file directory (tag dictionary) """ - - self.tag = None - """ Legacy tag entries """ - - super().__init__(fp, filename) - - def _open(self): - """Open the first image in a TIFF file""" - - # Header - ifh = self.fp.read(8) - if ifh[2] == 43: - ifh += self.fp.read(8) - - self.tag_v2 = ImageFileDirectory_v2(ifh) - - # legacy IFD entries will be filled in later - self.ifd = None - - # setup frame pointers - self.__first = self.__next = self.tag_v2.next - self.__frame = -1 - self._fp = self.fp - self._frame_pos = [] - self._n_frames = None - - logger.debug("*** TiffImageFile._open ***") - logger.debug(f"- __first: {self.__first}") - logger.debug(f"- ifh: {repr(ifh)}") # Use repr to avoid str(bytes) - - # and load the first frame - self._seek(0) - - @property - def n_frames(self): - if self._n_frames is None: - current = self.tell() - self._seek(len(self._frame_pos)) - while self._n_frames is None: - self._seek(self.tell() + 1) - self.seek(current) - return self._n_frames - - def seek(self, frame): - """Select a given frame as current image""" - if not self._seek_check(frame): - return - self._seek(frame) - # Create a new core image object on second and - # subsequent frames in the image. Image may be - # different size/mode. - Image._decompression_bomb_check(self.size) - self.im = Image.core.new(self.mode, self.size) - - def _seek(self, frame): - self.fp = self._fp - - # reset buffered io handle in case fp - # was passed to libtiff, invalidating the buffer - self.fp.tell() - - while len(self._frame_pos) <= frame: - if not self.__next: - msg = "no more images in TIFF file" - raise EOFError(msg) - logger.debug( - f"Seeking to frame {frame}, on frame {self.__frame}, " - f"__next {self.__next}, location: {self.fp.tell()}" - ) - self.fp.seek(self.__next) - self._frame_pos.append(self.__next) - logger.debug("Loading tags, location: %s" % self.fp.tell()) - self.tag_v2.load(self.fp) - if self.tag_v2.next in self._frame_pos: - # This IFD has already been processed - # Declare this to be the end of the image - self.__next = 0 - else: - self.__next = self.tag_v2.next - if self.__next == 0: - self._n_frames = frame + 1 - if len(self._frame_pos) == 1: - self.is_animated = self.__next != 0 - self.__frame += 1 - self.fp.seek(self._frame_pos[frame]) - self.tag_v2.load(self.fp) - self._reload_exif() - # fill the legacy tag/ifd entries - self.tag = self.ifd = ImageFileDirectory_v1.from_v2(self.tag_v2) - self.__frame = frame - self._setup() - - def tell(self): - """Return the current frame number""" - return self.__frame - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. - """ - return self._getxmp(self.tag_v2[XMP]) if XMP in self.tag_v2 else {} - - def get_photoshop_blocks(self): - """ - Returns a dictionary of Photoshop "Image Resource Blocks". - The keys are the image resource ID. For more information, see - https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577409_pgfId-1037727 - - :returns: Photoshop "Image Resource Blocks" in a dictionary. 
- """ - blocks = {} - val = self.tag_v2.get(ExifTags.Base.ImageResources) - if val: - while val[:4] == b"8BIM": - id = i16(val[4:6]) - n = math.ceil((val[6] + 1) / 2) * 2 - size = i32(val[6 + n : 10 + n]) - data = val[10 + n : 10 + n + size] - blocks[id] = {"data": data} - - val = val[math.ceil((10 + n + size) / 2) * 2 :] - return blocks - - def load(self): - if self.tile and self.use_load_libtiff: - return self._load_libtiff() - return super().load() - - def load_end(self): - if self._tile_orientation: - method = { - 2: Image.Transpose.FLIP_LEFT_RIGHT, - 3: Image.Transpose.ROTATE_180, - 4: Image.Transpose.FLIP_TOP_BOTTOM, - 5: Image.Transpose.TRANSPOSE, - 6: Image.Transpose.ROTATE_270, - 7: Image.Transpose.TRANSVERSE, - 8: Image.Transpose.ROTATE_90, - }.get(self._tile_orientation) - if method is not None: - self.im = self.im.transpose(method) - self._size = self.im.size - - # allow closing if we're on the first frame, there's no next - # This is the ImageFile.load path only, libtiff specific below. - if not self.is_animated: - self._close_exclusive_fp_after_loading = True - - # reset buffered io handle in case fp - # was passed to libtiff, invalidating the buffer - self.fp.tell() - - # load IFD data from fp before it is closed - exif = self.getexif() - for key in TiffTags.TAGS_V2_GROUPS: - if key not in exif: - continue - exif.get_ifd(key) - - def _load_libtiff(self): - """Overload method triggered when we detect a compressed tiff - Calls out to libtiff""" - - Image.Image.load(self) - - self.load_prepare() - - if not len(self.tile) == 1: - msg = "Not exactly one tile" - raise OSError(msg) - - # (self._compression, (extents tuple), - # 0, (rawmode, self._compression, fp)) - extents = self.tile[0][1] - args = list(self.tile[0][3]) - - # To be nice on memory footprint, if there's a - # file descriptor, use that instead of reading - # into a string in python. - try: - fp = hasattr(self.fp, "fileno") and self.fp.fileno() - # flush the file descriptor, prevents error on pypy 2.4+ - # should also eliminate the need for fp.tell - # in _seek - if hasattr(self.fp, "flush"): - self.fp.flush() - except OSError: - # io.BytesIO have a fileno, but returns an OSError if - # it doesn't use a file descriptor. - fp = False - - if fp: - args[2] = fp - - decoder = Image._getdecoder( - self.mode, "libtiff", tuple(args), self.decoderconfig - ) - try: - decoder.setimage(self.im, extents) - except ValueError as e: - msg = "Couldn't set the image" - raise OSError(msg) from e - - close_self_fp = self._exclusive_fp and not self.is_animated - if hasattr(self.fp, "getvalue"): - # We've got a stringio like thing passed in. Yay for all in memory. - # The decoder needs the entire file in one shot, so there's not - # a lot we can do here other than give it the entire file. - # unless we could do something like get the address of the - # underlying string for stringio. - # - # Rearranging for supporting byteio items, since they have a fileno - # that returns an OSError if there's no underlying fp. Easier to - # deal with here by reordering. - logger.debug("have getvalue. just sending in a string from getvalue") - n, err = decoder.decode(self.fp.getvalue()) - elif fp: - # we've got a actual file on disk, pass in the fp. - logger.debug("have fileno, calling fileno version of the decoder.") - if not close_self_fp: - self.fp.seek(0) - # 4 bytes, otherwise the trace might error out - n, err = decoder.decode(b"fpfp") - else: - # we have something else. - logger.debug("don't have fileno or getvalue. 
just reading") - self.fp.seek(0) - # UNDONE -- so much for that buffer size thing. - n, err = decoder.decode(self.fp.read()) - - self.tile = [] - self.readonly = 0 - - self.load_end() - - if close_self_fp: - self.fp.close() - self.fp = None # might be shared - - if err < 0: - raise OSError(err) - - return Image.Image.load(self) - - def _setup(self): - """Setup this image object based on current tags""" - - if 0xBC01 in self.tag_v2: - msg = "Windows Media Photo files not yet supported" - raise OSError(msg) - - # extract relevant tags - self._compression = COMPRESSION_INFO[self.tag_v2.get(COMPRESSION, 1)] - self._planar_configuration = self.tag_v2.get(PLANAR_CONFIGURATION, 1) - - # photometric is a required tag, but not everyone is reading - # the specification - photo = self.tag_v2.get(PHOTOMETRIC_INTERPRETATION, 0) - - # old style jpeg compression images most certainly are YCbCr - if self._compression == "tiff_jpeg": - photo = 6 - - fillorder = self.tag_v2.get(FILLORDER, 1) - - logger.debug("*** Summary ***") - logger.debug(f"- compression: {self._compression}") - logger.debug(f"- photometric_interpretation: {photo}") - logger.debug(f"- planar_configuration: {self._planar_configuration}") - logger.debug(f"- fill_order: {fillorder}") - logger.debug(f"- YCbCr subsampling: {self.tag.get(YCBCRSUBSAMPLING)}") - - # size - xsize = int(self.tag_v2.get(IMAGEWIDTH)) - ysize = int(self.tag_v2.get(IMAGELENGTH)) - self._size = xsize, ysize - - logger.debug(f"- size: {self.size}") - - sample_format = self.tag_v2.get(SAMPLEFORMAT, (1,)) - if len(sample_format) > 1 and max(sample_format) == min(sample_format) == 1: - # SAMPLEFORMAT is properly per band, so an RGB image will - # be (1,1,1). But, we don't support per band pixel types, - # and anything more than one band is a uint8. So, just - # take the first element. Revisit this if adding support - # for more exotic images. - sample_format = (1,) - - bps_tuple = self.tag_v2.get(BITSPERSAMPLE, (1,)) - extra_tuple = self.tag_v2.get(EXTRASAMPLES, ()) - if photo in (2, 6, 8): # RGB, YCbCr, LAB - bps_count = 3 - elif photo == 5: # CMYK - bps_count = 4 - else: - bps_count = 1 - bps_count += len(extra_tuple) - bps_actual_count = len(bps_tuple) - samples_per_pixel = self.tag_v2.get( - SAMPLESPERPIXEL, - 3 if self._compression == "tiff_jpeg" and photo in (2, 6) else 1, - ) - - if samples_per_pixel > MAX_SAMPLESPERPIXEL: - # DOS check, samples_per_pixel can be a Long, and we extend the tuple below - logger.error( - "More samples per pixel than can be decoded: %s", samples_per_pixel - ) - msg = "Invalid value for samples per pixel" - raise SyntaxError(msg) - - if samples_per_pixel < bps_actual_count: - # If a file has more values in bps_tuple than expected, - # remove the excess. - bps_tuple = bps_tuple[:samples_per_pixel] - elif samples_per_pixel > bps_actual_count and bps_actual_count == 1: - # If a file has only one value in bps_tuple, when it should have more, - # presume it is the same number of bits for all of the samples. 
- bps_tuple = bps_tuple * samples_per_pixel - - if len(bps_tuple) != samples_per_pixel: - msg = "unknown data organization" - raise SyntaxError(msg) - - # mode: check photometric interpretation and bits per pixel - key = ( - self.tag_v2.prefix, - photo, - sample_format, - fillorder, - bps_tuple, - extra_tuple, - ) - logger.debug(f"format key: {key}") - try: - self.mode, rawmode = OPEN_INFO[key] - except KeyError as e: - logger.debug("- unsupported format") - msg = "unknown pixel mode" - raise SyntaxError(msg) from e - - logger.debug(f"- raw mode: {rawmode}") - logger.debug(f"- pil mode: {self.mode}") - - self.info["compression"] = self._compression - - xres = self.tag_v2.get(X_RESOLUTION, 1) - yres = self.tag_v2.get(Y_RESOLUTION, 1) - - if xres and yres: - resunit = self.tag_v2.get(RESOLUTION_UNIT) - if resunit == 2: # dots per inch - self.info["dpi"] = (xres, yres) - elif resunit == 3: # dots per centimeter. convert to dpi - self.info["dpi"] = (xres * 2.54, yres * 2.54) - elif resunit is None: # used to default to 1, but now 2) - self.info["dpi"] = (xres, yres) - # For backward compatibility, - # we also preserve the old behavior - self.info["resolution"] = xres, yres - else: # No absolute unit of measurement - self.info["resolution"] = xres, yres - - # build tile descriptors - x = y = layer = 0 - self.tile = [] - self.use_load_libtiff = READ_LIBTIFF or self._compression != "raw" - if self.use_load_libtiff: - # Decoder expects entire file as one tile. - # There's a buffer size limit in load (64k) - # so large g4 images will fail if we use that - # function. - # - # Setup the one tile for the whole image, then - # use the _load_libtiff function. - - # libtiff handles the fillmode for us, so 1;IR should - # actually be 1;I. Including the R double reverses the - # bits, so stripes of the image are reversed. See - # https://github.com/python-pillow/Pillow/issues/279 - if fillorder == 2: - # Replace fillorder with fillorder=1 - key = key[:3] + (1,) + key[4:] - logger.debug(f"format key: {key}") - # this should always work, since all the - # fillorder==2 modes have a corresponding - # fillorder=1 mode - self.mode, rawmode = OPEN_INFO[key] - # libtiff always returns the bytes in native order. - # we're expecting image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. 
- if rawmode == "I;16": - rawmode = "I;16N" - if ";16B" in rawmode: - rawmode = rawmode.replace(";16B", ";16N") - if ";16L" in rawmode: - rawmode = rawmode.replace(";16L", ";16N") - - # YCbCr images with new jpeg compression with pixels in one plane - # unpacked straight into RGB values - if ( - photo == 6 - and self._compression == "jpeg" - and self._planar_configuration == 1 - ): - rawmode = "RGB" - - # Offset in the tile tuple is 0, we go from 0,0 to - # w,h, and we only do this once -- eds - a = (rawmode, self._compression, False, self.tag_v2.offset) - self.tile.append(("libtiff", (0, 0, xsize, ysize), 0, a)) - - elif STRIPOFFSETS in self.tag_v2 or TILEOFFSETS in self.tag_v2: - # striped image - if STRIPOFFSETS in self.tag_v2: - offsets = self.tag_v2[STRIPOFFSETS] - h = self.tag_v2.get(ROWSPERSTRIP, ysize) - w = self.size[0] - else: - # tiled image - offsets = self.tag_v2[TILEOFFSETS] - w = self.tag_v2.get(TILEWIDTH) - h = self.tag_v2.get(TILELENGTH) - - for offset in offsets: - if x + w > xsize: - stride = w * sum(bps_tuple) / 8 # bytes per line - else: - stride = 0 - - tile_rawmode = rawmode - if self._planar_configuration == 2: - # each band on it's own layer - tile_rawmode = rawmode[layer] - # adjust stride width accordingly - stride /= bps_count - - a = (tile_rawmode, int(stride), 1) - self.tile.append( - ( - self._compression, - (x, y, min(x + w, xsize), min(y + h, ysize)), - offset, - a, - ) - ) - x = x + w - if x >= self.size[0]: - x, y = 0, y + h - if y >= self.size[1]: - x = y = 0 - layer += 1 - else: - logger.debug("- unsupported data organization") - msg = "unknown data organization" - raise SyntaxError(msg) - - # Fix up info. - if ICCPROFILE in self.tag_v2: - self.info["icc_profile"] = self.tag_v2[ICCPROFILE] - - # fixup palette descriptor - - if self.mode in ["P", "PA"]: - palette = [o8(b // 256) for b in self.tag_v2[COLORMAP]] - self.palette = ImagePalette.raw("RGB;L", b"".join(palette)) - - self._tile_orientation = self.tag_v2.get(ExifTags.Base.Orientation) - - -# -# -------------------------------------------------------------------- -# Write TIFF files - -# little endian is default except for image modes with -# explicit big endian byte-order - -SAVE_INFO = { - # mode => rawmode, byteorder, photometrics, - # sampleformat, bitspersample, extra - "1": ("1", II, 1, 1, (1,), None), - "L": ("L", II, 1, 1, (8,), None), - "LA": ("LA", II, 1, 1, (8, 8), 2), - "P": ("P", II, 3, 1, (8,), None), - "PA": ("PA", II, 3, 1, (8, 8), 2), - "I": ("I;32S", II, 1, 2, (32,), None), - "I;16": ("I;16", II, 1, 1, (16,), None), - "I;16S": ("I;16S", II, 1, 2, (16,), None), - "F": ("F;32F", II, 1, 3, (32,), None), - "RGB": ("RGB", II, 2, 1, (8, 8, 8), None), - "RGBX": ("RGBX", II, 2, 1, (8, 8, 8, 8), 0), - "RGBA": ("RGBA", II, 2, 1, (8, 8, 8, 8), 2), - "CMYK": ("CMYK", II, 5, 1, (8, 8, 8, 8), None), - "YCbCr": ("YCbCr", II, 6, 1, (8, 8, 8), None), - "LAB": ("LAB", II, 8, 1, (8, 8, 8), None), - "I;32BS": ("I;32BS", MM, 1, 2, (32,), None), - "I;16B": ("I;16B", MM, 1, 1, (16,), None), - "I;16BS": ("I;16BS", MM, 1, 2, (16,), None), - "F;32BF": ("F;32BF", MM, 1, 3, (32,), None), -} - - -def _save(im, fp, filename): - try: - rawmode, prefix, photo, format, bits, extra = SAVE_INFO[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as TIFF" - raise OSError(msg) from e - - ifd = ImageFileDirectory_v2(prefix=prefix) - - encoderinfo = im.encoderinfo - encoderconfig = im.encoderconfig - try: - compression = encoderinfo["compression"] - except KeyError: - compression = 
im.info.get("compression") - if isinstance(compression, int): - # compression value may be from BMP. Ignore it - compression = None - if compression is None: - compression = "raw" - elif compression == "tiff_jpeg": - # OJPEG is obsolete, so use new-style JPEG compression instead - compression = "jpeg" - elif compression == "tiff_deflate": - compression = "tiff_adobe_deflate" - - libtiff = WRITE_LIBTIFF or compression != "raw" - - # required for color libtiff images - ifd[PLANAR_CONFIGURATION] = 1 - - ifd[IMAGEWIDTH] = im.size[0] - ifd[IMAGELENGTH] = im.size[1] - - # write any arbitrary tags passed in as an ImageFileDirectory - if "tiffinfo" in encoderinfo: - info = encoderinfo["tiffinfo"] - elif "exif" in encoderinfo: - info = encoderinfo["exif"] - if isinstance(info, bytes): - exif = Image.Exif() - exif.load(info) - info = exif - else: - info = {} - logger.debug("Tiffinfo Keys: %s" % list(info)) - if isinstance(info, ImageFileDirectory_v1): - info = info.to_v2() - for key in info: - if isinstance(info, Image.Exif) and key in TiffTags.TAGS_V2_GROUPS: - ifd[key] = info.get_ifd(key) - else: - ifd[key] = info.get(key) - try: - ifd.tagtype[key] = info.tagtype[key] - except Exception: - pass # might not be an IFD. Might not have populated type - - # additions written by Greg Couch, gregc@cgl.ucsf.edu - # inspired by image-sig posting from Kevin Cazabon, kcazabon@home.com - if hasattr(im, "tag_v2"): - # preserve tags from original TIFF image file - for key in ( - RESOLUTION_UNIT, - X_RESOLUTION, - Y_RESOLUTION, - IPTC_NAA_CHUNK, - PHOTOSHOP_CHUNK, - XMP, - ): - if key in im.tag_v2: - ifd[key] = im.tag_v2[key] - ifd.tagtype[key] = im.tag_v2.tagtype[key] - - # preserve ICC profile (should also work when saving other formats - # which support profiles as TIFF) -- 2008-06-06 Florian Hoech - icc = encoderinfo.get("icc_profile", im.info.get("icc_profile")) - if icc: - ifd[ICCPROFILE] = icc - - for key, name in [ - (IMAGEDESCRIPTION, "description"), - (X_RESOLUTION, "resolution"), - (Y_RESOLUTION, "resolution"), - (X_RESOLUTION, "x_resolution"), - (Y_RESOLUTION, "y_resolution"), - (RESOLUTION_UNIT, "resolution_unit"), - (SOFTWARE, "software"), - (DATE_TIME, "date_time"), - (ARTIST, "artist"), - (COPYRIGHT, "copyright"), - ]: - if name in encoderinfo: - ifd[key] = encoderinfo[name] - - dpi = encoderinfo.get("dpi") - if dpi: - ifd[RESOLUTION_UNIT] = 2 - ifd[X_RESOLUTION] = dpi[0] - ifd[Y_RESOLUTION] = dpi[1] - - if bits != (1,): - ifd[BITSPERSAMPLE] = bits - if len(bits) != 1: - ifd[SAMPLESPERPIXEL] = len(bits) - if extra is not None: - ifd[EXTRASAMPLES] = extra - if format != 1: - ifd[SAMPLEFORMAT] = format - - if PHOTOMETRIC_INTERPRETATION not in ifd: - ifd[PHOTOMETRIC_INTERPRETATION] = photo - elif im.mode in ("1", "L") and ifd[PHOTOMETRIC_INTERPRETATION] == 0: - if im.mode == "1": - inverted_im = im.copy() - px = inverted_im.load() - for y in range(inverted_im.height): - for x in range(inverted_im.width): - px[x, y] = 0 if px[x, y] == 255 else 255 - im = inverted_im - else: - im = ImageOps.invert(im) - - if im.mode in ["P", "PA"]: - lut = im.im.getpalette("RGB", "RGB;L") - colormap = [] - colors = len(lut) // 3 - for i in range(3): - colormap += [v * 256 for v in lut[colors * i : colors * (i + 1)]] - colormap += [0] * (256 - colors) - ifd[COLORMAP] = colormap - # data orientation - stride = len(bits) * ((im.size[0] * bits[0] + 7) // 8) - # aim for given strip size (64 KB by default) when using libtiff writer - if libtiff: - im_strip_size = encoderinfo.get("strip_size", STRIP_SIZE) - rows_per_strip = 
1 if stride == 0 else min(im_strip_size // stride, im.size[1]) - # JPEG encoder expects multiple of 8 rows - if compression == "jpeg": - rows_per_strip = min(((rows_per_strip + 7) // 8) * 8, im.size[1]) - else: - rows_per_strip = im.size[1] - if rows_per_strip == 0: - rows_per_strip = 1 - strip_byte_counts = 1 if stride == 0 else stride * rows_per_strip - strips_per_image = (im.size[1] + rows_per_strip - 1) // rows_per_strip - ifd[ROWSPERSTRIP] = rows_per_strip - if strip_byte_counts >= 2**16: - ifd.tagtype[STRIPBYTECOUNTS] = TiffTags.LONG - ifd[STRIPBYTECOUNTS] = (strip_byte_counts,) * (strips_per_image - 1) + ( - stride * im.size[1] - strip_byte_counts * (strips_per_image - 1), - ) - ifd[STRIPOFFSETS] = tuple( - range(0, strip_byte_counts * strips_per_image, strip_byte_counts) - ) # this is adjusted by IFD writer - # no compression by default: - ifd[COMPRESSION] = COMPRESSION_INFO_REV.get(compression, 1) - - if im.mode == "YCbCr": - for tag, value in { - YCBCRSUBSAMPLING: (1, 1), - REFERENCEBLACKWHITE: (0, 255, 128, 255, 128, 255), - }.items(): - ifd.setdefault(tag, value) - - blocklist = [TILEWIDTH, TILELENGTH, TILEOFFSETS, TILEBYTECOUNTS] - if libtiff: - if "quality" in encoderinfo: - quality = encoderinfo["quality"] - if not isinstance(quality, int) or quality < 0 or quality > 100: - msg = "Invalid quality setting" - raise ValueError(msg) - if compression != "jpeg": - msg = "quality setting only supported for 'jpeg' compression" - raise ValueError(msg) - ifd[JPEGQUALITY] = quality - - logger.debug("Saving using libtiff encoder") - logger.debug("Items: %s" % sorted(ifd.items())) - _fp = 0 - if hasattr(fp, "fileno"): - try: - fp.seek(0) - _fp = os.dup(fp.fileno()) - except io.UnsupportedOperation: - pass - - # optional types for non core tags - types = {} - # STRIPOFFSETS and STRIPBYTECOUNTS are added by the library - # based on the data in the strip. - # The other tags expect arrays with a certain length (fixed or depending on - # BITSPERSAMPLE, etc), passing arrays with a different length will result in - # segfaults. Block these tags until we add extra validation. - # SUBIFD may also cause a segfault. - blocklist += [ - REFERENCEBLACKWHITE, - STRIPBYTECOUNTS, - STRIPOFFSETS, - TRANSFERFUNCTION, - SUBIFD, - ] - - # bits per sample is a single short in the tiff directory, not a list. - atts = {BITSPERSAMPLE: bits[0]} - # Merge the ones that we have with (optional) more bits from - # the original file, e.g x,y resolution so that we can - # save(load('')) == original file. - legacy_ifd = {} - if hasattr(im, "tag"): - legacy_ifd = im.tag.to_v2() - - # SAMPLEFORMAT is determined by the image format and should not be copied - # from legacy_ifd. - supplied_tags = {**getattr(im, "tag_v2", {}), **legacy_ifd} - if SAMPLEFORMAT in supplied_tags: - del supplied_tags[SAMPLEFORMAT] - - for tag, value in itertools.chain(ifd.items(), supplied_tags.items()): - # Libtiff can only process certain core items without adding - # them to the custom dictionary. - # Custom items are supported for int, float, unicode, string and byte - # values. Other types and tuples require a tagtype. 
- if tag not in TiffTags.LIBTIFF_CORE: - if not getattr(Image.core, "libtiff_support_custom_tags", False): - continue - - if tag in ifd.tagtype: - types[tag] = ifd.tagtype[tag] - elif not (isinstance(value, (int, float, str, bytes))): - continue - else: - type = TiffTags.lookup(tag).type - if type: - types[tag] = type - if tag not in atts and tag not in blocklist: - if isinstance(value, str): - atts[tag] = value.encode("ascii", "replace") + b"\0" - elif isinstance(value, IFDRational): - atts[tag] = float(value) - else: - atts[tag] = value - - if SAMPLEFORMAT in atts and len(atts[SAMPLEFORMAT]) == 1: - atts[SAMPLEFORMAT] = atts[SAMPLEFORMAT][0] - - logger.debug("Converted items: %s" % sorted(atts.items())) - - # libtiff always expects the bytes in native order. - # we're storing image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. - if im.mode in ("I;16B", "I;16"): - rawmode = "I;16N" - - # Pass tags as sorted list so that the tags are set in a fixed order. - # This is required by libtiff for some tags. For example, the JPEGQUALITY - # pseudo tag requires that the COMPRESS tag was already set. - tags = list(atts.items()) - tags.sort() - a = (rawmode, compression, _fp, filename, tags, types) - e = Image._getencoder(im.mode, "libtiff", a, encoderconfig) - e.setimage(im.im, (0, 0) + im.size) - while True: - # undone, change to self.decodermaxblock: - errcode, data = e.encode(16 * 1024)[1:] - if not _fp: - fp.write(data) - if errcode: - break - if _fp: - try: - os.close(_fp) - except OSError: - pass - if errcode < 0: - msg = f"encoder error {errcode} when writing image file" - raise OSError(msg) - - else: - for tag in blocklist: - del ifd[tag] - offset = ifd.save(fp) - - ImageFile._save( - im, fp, [("raw", (0, 0) + im.size, offset, (rawmode, stride, 1))] - ) - - # -- helper for multi-page save -- - if "_debug_multipage" in encoderinfo: - # just to access o32 and o16 (using correct byte order) - im._debug_multipage = ifd - - -class AppendingTiffWriter: - fieldSizes = [ - 0, # None - 1, # byte - 1, # ascii - 2, # short - 4, # long - 8, # rational - 1, # sbyte - 1, # undefined - 2, # sshort - 4, # slong - 8, # srational - 4, # float - 8, # double - 4, # ifd - 2, # unicode - 4, # complex - 8, # long8 - ] - - # StripOffsets = 273 - # FreeOffsets = 288 - # TileOffsets = 324 - # JPEGQTables = 519 - # JPEGDCTables = 520 - # JPEGACTables = 521 - Tags = {273, 288, 324, 519, 520, 521} - - def __init__(self, fn, new=False): - if hasattr(fn, "read"): - self.f = fn - self.close_fp = False - else: - self.name = fn - self.close_fp = True - try: - self.f = open(fn, "w+b" if new else "r+b") - except OSError: - self.f = open(fn, "w+b") - self.beginning = self.f.tell() - self.setup() - - def setup(self): - # Reset everything. - self.f.seek(self.beginning, os.SEEK_SET) - - self.whereToWriteNewIFDOffset = None - self.offsetOfNewPage = 0 - - self.IIMM = iimm = self.f.read(4) - if not iimm: - # empty file - first page - self.isFirst = True - return - - self.isFirst = False - if iimm == b"II\x2a\x00": - self.setEndian("<") - elif iimm == b"MM\x00\x2a": - self.setEndian(">") - else: - msg = "Invalid TIFF file header" - raise RuntimeError(msg) - - self.skipIFDs() - self.goToEnd() - - def finalize(self): - if self.isFirst: - return - - # fix offsets - self.f.seek(self.offsetOfNewPage) - - iimm = self.f.read(4) - if not iimm: - # msg = "nothing written into new page" - # raise RuntimeError(msg) - # Make it easy to finish a frame without committing to a new one. 
- return - - if iimm != self.IIMM: - msg = "IIMM of new page doesn't match IIMM of first page" - raise RuntimeError(msg) - - ifd_offset = self.readLong() - ifd_offset += self.offsetOfNewPage - self.f.seek(self.whereToWriteNewIFDOffset) - self.writeLong(ifd_offset) - self.f.seek(ifd_offset) - self.fixIFD() - - def newFrame(self): - # Call this to finish a frame. - self.finalize() - self.setup() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - if self.close_fp: - self.close() - return False - - def tell(self): - return self.f.tell() - self.offsetOfNewPage - - def seek(self, offset, whence=io.SEEK_SET): - if whence == os.SEEK_SET: - offset += self.offsetOfNewPage - - self.f.seek(offset, whence) - return self.tell() - - def goToEnd(self): - self.f.seek(0, os.SEEK_END) - pos = self.f.tell() - - # pad to 16 byte boundary - pad_bytes = 16 - pos % 16 - if 0 < pad_bytes < 16: - self.f.write(bytes(pad_bytes)) - self.offsetOfNewPage = self.f.tell() - - def setEndian(self, endian): - self.endian = endian - self.longFmt = self.endian + "L" - self.shortFmt = self.endian + "H" - self.tagFormat = self.endian + "HHL" - - def skipIFDs(self): - while True: - ifd_offset = self.readLong() - if ifd_offset == 0: - self.whereToWriteNewIFDOffset = self.f.tell() - 4 - break - - self.f.seek(ifd_offset) - num_tags = self.readShort() - self.f.seek(num_tags * 12, os.SEEK_CUR) - - def write(self, data): - return self.f.write(data) - - def readShort(self): - (value,) = struct.unpack(self.shortFmt, self.f.read(2)) - return value - - def readLong(self): - (value,) = struct.unpack(self.longFmt, self.f.read(4)) - return value - - def rewriteLastShortToLong(self, value): - self.f.seek(-2, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def rewriteLastShort(self, value): - self.f.seek(-2, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.shortFmt, value)) - if bytes_written is not None and bytes_written != 2: - msg = f"wrote only {bytes_written} bytes but wanted 2" - raise RuntimeError(msg) - - def rewriteLastLong(self, value): - self.f.seek(-4, os.SEEK_CUR) - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def writeShort(self, value): - bytes_written = self.f.write(struct.pack(self.shortFmt, value)) - if bytes_written is not None and bytes_written != 2: - msg = f"wrote only {bytes_written} bytes but wanted 2" - raise RuntimeError(msg) - - def writeLong(self, value): - bytes_written = self.f.write(struct.pack(self.longFmt, value)) - if bytes_written is not None and bytes_written != 4: - msg = f"wrote only {bytes_written} bytes but wanted 4" - raise RuntimeError(msg) - - def close(self): - self.finalize() - self.f.close() - - def fixIFD(self): - num_tags = self.readShort() - - for i in range(num_tags): - tag, field_type, count = struct.unpack(self.tagFormat, self.f.read(8)) - - field_size = self.fieldSizes[field_type] - total_size = field_size * count - is_local = total_size <= 4 - if not is_local: - offset = self.readLong() - offset += self.offsetOfNewPage - self.rewriteLastLong(offset) - - if tag in self.Tags: - cur_pos = self.f.tell() - - if is_local: - self.fixOffsets( - count, isShort=(field_size == 2), isLong=(field_size == 4) - ) - 
self.f.seek(cur_pos + 4) - else: - self.f.seek(offset) - self.fixOffsets( - count, isShort=(field_size == 2), isLong=(field_size == 4) - ) - self.f.seek(cur_pos) - - offset = cur_pos = None - - elif is_local: - # skip the locally stored value that is not an offset - self.f.seek(4, os.SEEK_CUR) - - def fixOffsets(self, count, isShort=False, isLong=False): - if not isShort and not isLong: - msg = "offset is neither short nor long" - raise RuntimeError(msg) - - for i in range(count): - offset = self.readShort() if isShort else self.readLong() - offset += self.offsetOfNewPage - if isShort and offset >= 65536: - # offset is now too large - we must convert shorts to longs - if count != 1: - msg = "not implemented" - raise RuntimeError(msg) # XXX TODO - - # simple case - the offset is just one and therefore it is - # local (not referenced with another offset) - self.rewriteLastShortToLong(offset) - self.f.seek(-10, os.SEEK_CUR) - self.writeShort(TiffTags.LONG) # rewrite the type to LONG - self.f.seek(8, os.SEEK_CUR) - elif isShort: - self.rewriteLastShort(offset) - else: - self.rewriteLastLong(offset) - - -def _save_all(im, fp, filename): - encoderinfo = im.encoderinfo.copy() - encoderconfig = im.encoderconfig - append_images = list(encoderinfo.get("append_images", [])) - if not hasattr(im, "n_frames") and not append_images: - return _save(im, fp, filename) - - cur_idx = im.tell() - try: - with AppendingTiffWriter(fp) as tf: - for ims in [im] + append_images: - ims.encoderinfo = encoderinfo - ims.encoderconfig = encoderconfig - if not hasattr(ims, "n_frames"): - nfr = 1 - else: - nfr = ims.n_frames - - for idx in range(nfr): - ims.seek(idx) - ims.load() - _save(ims, tf, filename) - tf.newFrame() - finally: - im.seek(cur_idx) - - -# -# -------------------------------------------------------------------- -# Register - -Image.register_open(TiffImageFile.format, TiffImageFile, _accept) -Image.register_save(TiffImageFile.format, _save) -Image.register_save_all(TiffImageFile.format, _save_all) - -Image.register_extensions(TiffImageFile.format, [".tif", ".tiff"]) - -Image.register_mime(TiffImageFile.format, "image/tiff") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/dev/packaging/gen_wheel_index.sh b/spaces/carlosalonso/Detection-video/carpeta_deteccion/dev/packaging/gen_wheel_index.sh deleted file mode 100644 index ec96a27d809fe87ad963f3ffa7147ca4afbc1711..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/dev/packaging/gen_wheel_index.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - - -root=$(readlink -f $1) -if [[ -z "$root" ]]; then - echo "Usage: ./gen_wheel_index.sh /absolute/path/to/wheels" - exit -fi - -export LC_ALL=C # reproducible sort -# NOTE: all sort in this script might not work when xx.10 is released - -index=$root/index.html - -cd "$root" -for cu in cpu cu92 cu100 cu101 cu102 cu110 cu111 cu113; do - mkdir -p "$root/$cu" - cd "$root/$cu" - echo "Creating $PWD/index.html ..." - # First sort by torch version, then stable sort by d2 version with unique. - # As a result, the latest torch version for each d2 version is kept. - for whl in $(find -type f -name '*.whl' -printf '%P\n' \ - | sort -k 1 -r | sort -t '/' -k 2 --stable -r --unique); do - echo "$whl
    " - done > index.html - - - for torch in torch*; do - cd "$root/$cu/$torch" - - # list all whl for each cuda,torch version - echo "Creating $PWD/index.html ..." - for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do - echo "$whl
    " - done > index.html - done -done - -cd "$root" -# Just list everything: -echo "Creating $index ..." -for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do - echo "$whl
    " -done > "$index" - diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/mask_rcnn_vitdet_h_100ep.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/mask_rcnn_vitdet_h_100ep.py deleted file mode 100644 index fd82ff9710f0ca28ec9d21880e2f14d53410f3d0..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/mask_rcnn_vitdet_h_100ep.py +++ /dev/null @@ -1,28 +0,0 @@ -from functools import partial - -from detectron2.modeling.backbone.vit import get_vit_lr_decay_rate - -from .mask_rcnn_vitdet_b_100ep import ( - dataloader, - lr_multiplier, - model, - train, - optimizer, -) - -train.init_checkpoint = "detectron2://ImageNetPretrained/MAE/mae_pretrain_vit_huge_p14to16.pth" - -model.backbone.net.embed_dim = 1280 -model.backbone.net.depth = 32 -model.backbone.net.num_heads = 16 -model.backbone.net.drop_path_rate = 0.4 -# 7, 15, 23, 31 for global attention -model.backbone.net.window_block_indexes = ( - list(range(0, 7)) + list(range(8, 15)) + list(range(16, 23)) + list(range(24, 31)) -) - - -optimizer.lr = 1e-4 -optimizer.params.lr_factor_func = partial(get_vit_lr_decay_rate, lr_decay_rate=0.9, num_layers=32) -optimizer.params.overrides = {} -optimizer.params.weight_decay_norm = None diff --git a/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/pyinstaller.sh b/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/pyinstaller.sh deleted file mode 100644 index d9d9d8bc35bb17f575f972e203f656d587181ab5..0000000000000000000000000000000000000000 --- a/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/pyinstaller.sh +++ /dev/null @@ -1 +0,0 @@ -pyinstaller --onefile --name gpttranslator --paths=.venv/lib/python3.10/site-packages/ gpttranslator.py diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/README.md b/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/README.md deleted file mode 100644 index 5346904d84c688a405005341266a8d1eb2e595fb..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/README.md +++ /dev/null @@ -1,544 +0,0 @@ - - -# Language model training examples - -The following example showcases how to train a language model from scratch -using the JAX/Flax backend. - -JAX/Flax allows you to trace pure functions and compile them into efficient, fused accelerator code on both GPU and TPU. -Models written in JAX/Flax are **immutable** and updated in a purely functional -way which enables simple and efficient model parallelism. - -## Masked language modeling - -In the following, we demonstrate how to train a bi-directional transformer model -using masked language modeling objective as introduced in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805). -More specifically, we demonstrate how JAX/Flax can be leveraged -to pre-train [**`roberta-base`**](https://huggingface.co/roberta-base) -in Norwegian on a single TPUv3-8 pod. - -The example script uses the 🤗 Datasets library. You can easily customize them to your needs if you need extra processing on your datasets. - -To setup all relevant files for training, let's create a directory. - -```bash -mkdir ./norwegian-roberta-base -``` - -### Train tokenizer - -In the first step, we train a tokenizer to efficiently process the text input for the model. 
Similar to how it is shown in [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train), we use a **`ByteLevelBPETokenizer`**. -The tokenizer is trained on the complete Norwegian dataset of OSCAR -and consequently saved in the cloned model directory. -This can take up to 10 minutes depending on your hardware ☕. - -```python -from datasets import load_dataset -from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer - -# load dataset -dataset = load_dataset("oscar", "unshuffled_deduplicated_no", split="train") - -# Instantiate tokenizer -tokenizer = ByteLevelBPETokenizer() - -def batch_iterator(batch_size=1000): - for i in range(0, len(dataset), batch_size): - yield dataset[i: i + batch_size]["text"] - -# Customized training -tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[ - "", - "", - "", - "", - "", -]) - -# Save files to disk -tokenizer.save("./norwegian-roberta-base/tokenizer.json") -``` - -### Create configuration - -Next, we create the model's configuration file. This is as simple -as loading and storing [`**roberta-base**`](https://huggingface.co/roberta-base) -in the local model folder: - -```python -from transformers import RobertaConfig - -config = RobertaConfig.from_pretrained("roberta-base", vocab_size=50265) -config.save_pretrained("./norwegian-roberta-base") -``` - -Great, we have set up our model repository. During training, we will automatically -push the training logs and model weights to the repo. - -### Train model - -Next we can run the example script to pretrain the model: - -```bash -python run_mlm_flax.py \ - --output_dir="./norwegian-roberta-base" \ - --model_type="roberta" \ - --config_name="./norwegian-roberta-base" \ - --tokenizer_name="./norwegian-roberta-base" \ - --dataset_name="oscar" \ - --dataset_config_name="unshuffled_deduplicated_no" \ - --max_seq_length="128" \ - --weight_decay="0.01" \ - --per_device_train_batch_size="128" \ - --per_device_eval_batch_size="128" \ - --learning_rate="3e-4" \ - --warmup_steps="1000" \ - --overwrite_output_dir \ - --num_train_epochs="18" \ - --adam_beta1="0.9" \ - --adam_beta2="0.98" \ - --logging_steps="500" \ - --save_steps="2500" \ - --eval_steps="2500" \ - --push_to_hub -``` - -Training should converge at a loss and accuracy -of 1.78 and 0.64 respectively after 18 epochs on a single TPUv3-8. -This should take less than 18 hours. -Training statistics can be accessed on [tfhub.dev](https://tensorboard.dev/experiment/GdYmdak2TWeVz0DDRYOrrg). - -For a step-by-step walkthrough of how to do masked language modeling in Flax, please have a -look at [this](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb) google colab. - -## Causal language modeling - -In the following, we demonstrate how to train an auto-regressive causal transformer model -in JAX/Flax. -More specifically, we pretrain a randomly initialized [**`gpt2`**](https://huggingface.co/gpt2) model in Norwegian on a single TPUv3-8. -to pre-train 124M [**`gpt2`**](https://huggingface.co/gpt2) -in Norwegian on a single TPUv3-8 pod. - -The example script uses the 🤗 Datasets library. You can easily customize them to your needs if you need extra processing on your datasets. - - -To setup all relevant files for training, let's create a directory. 
- -```bash -mkdir ./norwegian-gpt2 -``` - -### Train tokenizer - -In the first step, we train a tokenizer to efficiently process the text input for the model. Similar to how it is shown in [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train), we use a **`ByteLevelBPETokenizer`**. -The tokenizer is trained on the complete Norwegian dataset of OSCAR -and consequently saved in the cloned model directory. -This can take up to 10 minutes depending on your hardware ☕. - -```python -from datasets import load_dataset -from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer - -# load dataset -dataset = load_dataset("oscar", "unshuffled_deduplicated_no", split="train") - -# Instantiate tokenizer -tokenizer = ByteLevelBPETokenizer() - -def batch_iterator(batch_size=1000): - for i in range(0, len(dataset), batch_size): - yield dataset[i: i + batch_size]["text"] - -# Customized training -tokenizer.train_from_iterator(batch_iterator(), vocab_size=50257, min_frequency=2, special_tokens=[ - "", - "", - "", - "", - "", -]) - -# Save files to disk -tokenizer.save("./norwegian-gpt2/tokenizer.json") -``` - -### Create configuration - -Next, we create the model's configuration file. This is as simple -as loading and storing [`**gpt2**`](https://huggingface.co/gpt2) -in the local model folder: - -```python -from transformers import GPT2Config - -config = GPT2Config.from_pretrained("gpt2", resid_pdrop=0.0, embd_pdrop=0.0, attn_pdrop=0.0, vocab_size=50257) -config.save_pretrained("./norwegian-gpt2") -``` - -Great, we have set up our model repository. During training, we will now automatically -push the training logs and model weights to the repo. - -### Train model - -Finally, we can run the example script to pretrain the model: - -```bash -python run_clm_flax.py \ - --output_dir="./norwegian-gpt2" \ - --model_type="gpt2" \ - --config_name="./norwegian-gpt2" \ - --tokenizer_name="./norwegian-gpt2" \ - --dataset_name="oscar" \ - --dataset_config_name="unshuffled_deduplicated_no" \ - --do_train --do_eval \ - --block_size="512" \ - --per_device_train_batch_size="64" \ - --per_device_eval_batch_size="64" \ - --learning_rate="5e-3" --warmup_steps="1000" \ - --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \ - --overwrite_output_dir \ - --num_train_epochs="20" \ - --logging_steps="500" \ - --save_steps="2500" \ - --eval_steps="2500" \ - --push_to_hub -``` - -Training should converge at a loss and perplexity -of 3.24 and 25.72 respectively after 20 epochs on a single TPUv3-8. -This should take less than ~21 hours. -Training statistics can be accessed on [tfhub.de](https://tensorboard.dev/experiment/2zEhLwJ0Qp2FAkI3WVH9qA). - -For a step-by-step walkthrough of how to do causal language modeling in Flax, please have a -look at [this](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb) google colab. - -## T5-like span-masked language modeling - -In the following, we demonstrate how to train a T5 model using the span-masked language model -objective as proposed in the [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683). -More specifically, we demonstrate how JAX/Flax can be leveraged -to pre-train [**`google/t5-v1_1-base`**](https://huggingface.co/google/t5-v1_1-base) -in Norwegian on a single TPUv3-8 pod. - -The example script uses the 🤗 Datasets library. 
You can easily customize them to your needs if you need extra processing on your datasets. - -Let's start by creating a model repository to save the trained model and logs. -Here we call the model `"norwegian-t5-base"`, but you can change the model name as you like. - -To setup all relevant files for training, let's create a directory. - -```bash -cd ./norwegian-t5-base -``` - -### Train tokenizer - -In the first step, we train a tokenizer to efficiently process the text input for the model. -We make use of the [tokenizers](https://github.com/huggingface/tokenizers) library to train -a sentencepiece unigram tokenizer as shown in [t5_tokenizer_model.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling/t5_tokenizer_model.py) -which is heavily inspired from [yandex-research/DeDLOC's tokenizer model](https://github.com/yandex-research/DeDLOC/blob/5c994bc64e573702a9a79add3ecd68b38f14b548/sahajbert/tokenizer/tokenizer_model.py) . - -The tokenizer is trained on the complete Norwegian dataset of OSCAR -and consequently saved in the cloned model directory. -This can take up to 120 minutes depending on your hardware ☕☕☕ . - -```python -import datasets - -from t5_tokenizer_model import SentencePieceUnigramTokenizer - - -vocab_size = 32_000 -input_sentence_size = None - -# Initialize a dataset -dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_no", split="train") - -tokenizer = SentencePieceUnigramTokenizer(unk_token="", eos_token="", pad_token="") - - -# Build an iterator over this dataset -def batch_iterator(input_sentence_size=None): - if input_sentence_size is None: - input_sentence_size = len(dataset) - batch_length = 100 - for i in range(0, input_sentence_size, batch_length): - yield dataset[i: i + batch_length]["text"] - - -# Train tokenizer -tokenizer.train_from_iterator( - iterator=batch_iterator(input_sentence_size=input_sentence_size), - vocab_size=vocab_size, - show_progress=True, -) - -# Save files to disk -tokenizer.save("./norwegian-t5-base/tokenizer.json") -``` - -### Create configuration - -Next, we create the model's configuration file. This is as simple -as loading and storing [`**google/t5-v1_1-base**`](https://huggingface.co/google/t5-v1_1-base) -in the local model folder: - -```python -from transformers import T5Config - -config = T5Config.from_pretrained("google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size()) -config.save_pretrained("./norwegian-t5-base") -``` - -Great, we have set up our model repository. During training, we will automatically -push the training logs and model weights to the repo. - -### Train model - -Next we can run the example script to pretrain the model: - -```bash -python run_t5_mlm_flax.py \ - --output_dir="./norwegian-t5-base" \ - --model_type="t5" \ - --config_name="./norwegian-t5-base" \ - --tokenizer_name="./norwegian-t5-base" \ - --dataset_name="oscar" \ - --dataset_config_name="unshuffled_deduplicated_no" \ - --max_seq_length="512" \ - --per_device_train_batch_size="32" \ - --per_device_eval_batch_size="32" \ - --adafactor \ - --learning_rate="0.005" \ - --weight_decay="0.001" \ - --warmup_steps="2000" \ - --overwrite_output_dir \ - --logging_steps="500" \ - --save_steps="10000" \ - --eval_steps="2500" \ - --push_to_hub -``` - -Training should converge at a loss and accuracy -of 2.36 and 57.0 respectively after 3 epochs on a single TPUv3-8. -This should take around 4.5 hours. 
-Training statistics can be accessed on directly on the 🤗 [hub](https://huggingface.co/patrickvonplaten/t5-base-norwegian/tensorboard) - -## BART: Denoising language modeling - -In the following, we demonstrate how to train a BART model -using denoising language modeling objective as introduced in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461). -More specifically, we demonstrate how JAX/Flax can be leveraged -to pre-train [**`bart-base`**](https://huggingface.co/facebook/bart-base) -in Norwegian on a single TPUv3-8 pod. - -The example script uses the 🤗 Datasets library. You can easily customize them to your needs if you need extra processing on your datasets. - -To setup all relevant files for training, let's create a directory. - -```bash -mkdir ./norwegian-bart-base -``` - -### Train tokenizer -In the first step, we train a tokenizer to efficiently process the text input for the model. Similar to how it is shown in [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train), we use a **`ByteLevelBPETokenizer`**. -The tokenizer is trained on the complete Norwegian dataset of OSCAR -and consequently saved in the cloned model directory. -This can take up to 10 minutes depending on your hardware ☕. - -```python -from datasets import load_dataset -from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer - -# load dataset -dataset = load_dataset("oscar", "unshuffled_deduplicated_no", split="train") - -# Instantiate tokenizer -tokenizer = ByteLevelBPETokenizer() - -def batch_iterator(batch_size=1000): - for i in range(0, len(dataset), batch_size): - yield dataset[i: i + batch_size]["text"] - -# Customized training -tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[ - "", - "", - "", - "", - "", -]) - -# Save files to disk -tokenizer.save("./norwegian-bart-base/tokenizer.json") -``` - -### Create configuration - -Next, we create the model's configuration file. This is as simple -as loading and storing [`**facebook/bart-base**`](https://huggingface.co/facebook/bart-base) -in the local model folder: - -```python -from transformers import BartConfig -config = BartConfig.from_pretrained("facebook/bart-base", vocab_size=50265) -config.save_pretrained("./norwegian-bart-base") -``` - -Great, we have set up our model repository. During training, we will automatically -push the training logs and model weights to the repo. - -### Train model - -Next we can run the example script to pretrain the model: - -```bash -python run_bart_dlm_flax.py \ - --output_dir="./norwegian-bart-base" \ - --config_name="./norwegian-bart-base" \ - --tokenizer_name="./norwegian-bart-base" \ - --dataset_name="oscar" \ - --dataset_config_name="unshuffled_deduplicated_no" \ - --max_seq_length="1024" \ - --per_device_train_batch_size="32" \ - --per_device_eval_batch_size="32" \ - --learning_rate="1e-4" \ - --warmup_steps="2000" \ - --overwrite_output_dir \ - --logging_steps="500" \ - --save_steps="2000" \ - --eval_steps="2000" \ - --push_to_hub -``` - -Training should converge at a loss and accuracy -of 1.36 and 0.77 respectively after 3 epochs on a single TPUv3-8. -This should take less than 6 hours. -Training statistics can be accessed on [tfhub.dev](https://tensorboard.dev/experiment/Maw62QlaSXWS0MOf2V2lbg/). 
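-
-As a quick sanity check after any of the pre-training runs above, the checkpoint written to the output directory can be loaded back with the matching Flax model class. The snippet below is a minimal sketch for the Norwegian RoBERTa checkpoint trained earlier; the paths follow the directories used above, while the test sentence is purely illustrative:
-
-```python
-from transformers import FlaxRobertaForMaskedLM, PreTrainedTokenizerFast
-
-# Load the tokenizer trained above and the Flax weights written by run_mlm_flax.py
-tokenizer = PreTrainedTokenizerFast(tokenizer_file="./norwegian-roberta-base/tokenizer.json")
-model = FlaxRobertaForMaskedLM.from_pretrained("./norwegian-roberta-base")
-
-# Encode a short (illustrative) sentence and run a forward pass
-inputs = tokenizer("Jeg heter Ola.", return_tensors="np")
-outputs = model(**inputs)
-print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
-```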
-
-## Runtime evaluation
-
-We also ran masked language modeling using PyTorch/XLA on a TPUv3-8, and PyTorch on 8 V100 GPUs. We report the
-overall training time below.
-For reproducibility, we state the training commands used for PyTorch/XLA and PyTorch further below.
-
-| Task | [TPU v3-8 (Flax)](https://tensorboard.dev/experiment/GdYmdak2TWeVz0DDRYOrrg/) | [TPU v3-8 (PyTorch/XLA)](https://tensorboard.dev/experiment/7Jq1kcQQRAmy12KOdXek7A/)| [8 GPU (PyTorch)](https://tensorboard.dev/experiment/PJneV8FQRxa2unPw1QnVHA) |
-|-------|-----------|------------|------------|
-| MLM | 15h32m | 23h46m | 44h14m |
-
-*All experiments are run on Google Cloud Platform.
-GPU experiments are run without further optimizations besides JAX
-transformations. GPU experiments are run with full precision (fp32). "TPU v3-8"
-are 8 TPU cores on 4 chips (each chip has 2 cores), while "8 GPU" are 8 GPU chips.
-
-### Script to run MLM with PyTorch/XLA on TPUv3-8
-
-For comparison one can run the same pre-training with PyTorch/XLA on TPU. To set up PyTorch/XLA on Cloud TPU VMs, please
-refer to [this](https://cloud.google.com/tpu/docs/pytorch-xla-ug-tpu-vm) guide.
-Having created the tokenizer and configuration in `norwegian-roberta-base`, we create the following symbolic links:
-
-```bash
-ln -s ~/transformers/examples/pytorch/language-modeling/run_mlm.py ./
-ln -s ~/transformers/examples/pytorch/xla_spawn.py ./
-```
-
-, set the following environment variables:
-
-```bash
-export XRT_TPU_CONFIG="localservice;0;localhost:51011"
-unset LD_PRELOAD
-
-export NUM_TPUS=8
-export TOKENIZERS_PARALLELISM=0
-export MODEL_DIR="./norwegian-roberta-base"
-mkdir -p ${MODEL_DIR}
-```
-
-, and start training as follows:
-
-```bash
-python3 xla_spawn.py --num_cores ${NUM_TPUS} run_mlm.py --output_dir="./runs" \
-    --model_type="roberta" \
-    --config_name="${MODEL_DIR}" \
-    --tokenizer_name="${MODEL_DIR}" \
-    --dataset_name="oscar" \
-    --dataset_config_name="unshuffled_deduplicated_no" \
-    --max_seq_length="128" \
-    --weight_decay="0.01" \
-    --per_device_train_batch_size="128" \
-    --per_device_eval_batch_size="128" \
-    --learning_rate="3e-4" \
-    --warmup_steps="1000" \
-    --overwrite_output_dir \
-    --num_train_epochs="18" \
-    --adam_beta1="0.9" \
-    --adam_beta2="0.98" \
-    --do_train \
-    --do_eval \
-    --logging_steps="500" \
-    --evaluation_strategy="epoch" \
-    --report_to="tensorboard" \
-    --save_strategy="no"
-```
-
-### Script to compare pre-training with PyTorch on 8 GPU V100s
-
-For comparison you can run the same pre-training with PyTorch on GPU. Note that we have to make use of `gradient_accumulation_steps`
-because the maximum batch size that fits on a single V100 GPU is 32 instead of 128.
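-With a per-device batch size of 32, 8 GPUs, and 4 gradient accumulation steps (see the command below), the effective batch size works out to 32 × 8 × 4 = 1024 samples per update, the same as the 128 × 8 = 1024 used on the TPUv3-8.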
-Having created the tokenizer and configuration in `norwegian-roberta-base`, we create the following symbolic links:
-
-```bash
-ln -s ~/transformers/examples/pytorch/language-modeling/run_mlm.py ./
-```
-
-, set some environment variables:
-
-```bash
-export NUM_GPUS=8
-export TOKENIZERS_PARALLELISM=0
-export MODEL_DIR="./norwegian-roberta-base"
-mkdir -p ${MODEL_DIR}
-```
-
-, and can start training as follows:
-
-```bash
-python3 -m torch.distributed.launch --nproc_per_node ${NUM_GPUS} run_mlm.py \
-    --output_dir="${MODEL_DIR}" \
-    --model_type="roberta" \
-    --config_name="${MODEL_DIR}" \
-    --tokenizer_name="${MODEL_DIR}" \
-    --dataset_name="oscar" \
-    --dataset_config_name="unshuffled_deduplicated_no" \
-    --max_seq_length="128" \
-    --weight_decay="0.01" \
-    --per_device_train_batch_size="32" \
-    --per_device_eval_batch_size="32" \
-    --gradient_accumulation_steps="4" \
-    --learning_rate="3e-4" \
-    --warmup_steps="1000" \
-    --overwrite_output_dir \
-    --num_train_epochs="18" \
-    --adam_beta1="0.9" \
-    --adam_beta2="0.98" \
-    --do_train \
-    --do_eval \
-    --logging_steps="500" \
-    --evaluation_strategy="steps" \
-    --report_to="tensorboard" \
-    --save_strategy="no"
-```
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/bart/tokenization_bart_fast.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/bart/tokenization_bart_fast.py
deleted file mode 100644
index f05ed1b7a82d5da0c67cb9bbb569d5acb8fff8ed..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/models/bart/tokenization_bart_fast.py
+++ /dev/null
@@ -1,306 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The Facebook AI Research Team Authors and The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- -import json -from typing import List, Optional, Tuple - -from tokenizers import pre_tokenizers, processors - -from ...tokenization_utils_base import AddedToken, BatchEncoding -from ...tokenization_utils_fast import PreTrainedTokenizerFast -from ...utils import logging -from .tokenization_bart import BartTokenizer - - -logger = logging.get_logger(__name__) - - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.json", "merges_file": "merges.txt", "tokenizer_file": "tokenizer.json"} - -# See all BART models at https://huggingface.co/models?filter=bart -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/vocab.json", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/vocab.json", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/vocab.json", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/vocab.json", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/vocab.json", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/vocab.json", - }, - "merges_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/merges.txt", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/merges.txt", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/merges.txt", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/merges.txt", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/merges.txt", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/merges.txt", - }, - "tokenizer_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/tokenizer.json", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/tokenizer.json", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/tokenizer.json", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/tokenizer.json", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/tokenizer.json", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/tokenizer.json", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "facebook/bart-base": 1024, - "facebook/bart-large": 1024, - "facebook/bart-large-mnli": 1024, - "facebook/bart-large-cnn": 1024, - "facebook/bart-large-xsum": 1024, - "yjernite/bart_eli5": 1024, -} - - -class BartTokenizerFast(PreTrainedTokenizerFast): - r""" - Construct a "fast" BART tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2 tokenizer, - using byte-level Byte-Pair-Encoding. 
- - This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will - be encoded differently whether it is at the beginning of the sentence (without space) or not: - - ```python - >>> from transformers import BartTokenizerFast - - >>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base") - >>> tokenizer("Hello world")["input_ids"] - [0, 31414, 232, 2] - - >>> tokenizer(" Hello world")["input_ids"] - [0, 20920, 232, 2] - ``` - - You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you - call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. - - - - When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. - - - - This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should - refer to this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - bos_token (`str`, *optional*, defaults to `""`): - The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. - - - - When building a sequence using special tokens, this is not the token that is used for the beginning of - sequence. The token used is the `cls_token`. - - - - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - cls_token (`str`, *optional*, defaults to `""`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - mask_token (`str`, *optional*, defaults to `""`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - add_prefix_space (`bool`, *optional*, defaults to `False`): - Whether or not to add an initial space to the input. This allows to treat the leading word just as any - other word. (BART tokenizer detect beginning of words by the preceding space). - trim_offsets (`bool`, *optional*, defaults to `True`): - Whether the post processing step should trim offsets to avoid including whitespaces. 
- """ - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - slow_tokenizer_class = BartTokenizer - - def __init__( - self, - vocab_file=None, - merges_file=None, - tokenizer_file=None, - errors="replace", - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - add_prefix_space=False, - trim_offsets=True, - **kwargs, - ): - super().__init__( - vocab_file, - merges_file, - tokenizer_file=tokenizer_file, - errors=errors, - bos_token=bos_token, - eos_token=eos_token, - sep_token=sep_token, - cls_token=cls_token, - unk_token=unk_token, - pad_token=pad_token, - mask_token=mask_token, - add_prefix_space=add_prefix_space, - trim_offsets=trim_offsets, - **kwargs, - ) - - pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__()) - if pre_tok_state.get("add_prefix_space", add_prefix_space) != add_prefix_space: - pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type")) - pre_tok_state["add_prefix_space"] = add_prefix_space - self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state) - - self.add_prefix_space = add_prefix_space - - # the pre_tokenizer is already updated in the GPT2TokenizerFast `__init__` - tokenizer_component = "post_processor" - tokenizer_component_instance = getattr(self.backend_tokenizer, tokenizer_component, None) - if tokenizer_component_instance: - state = json.loads(tokenizer_component_instance.__getstate__()) - - # The lists 'sep' and 'cls' must be cased in tuples for the object `post_processor_class` - if "sep" in state: - state["sep"] = tuple(state["sep"]) - if "cls" in state: - state["cls"] = tuple(state["cls"]) - - changes_to_apply = False - - if state.get("add_prefix_space", add_prefix_space) != add_prefix_space: - state["add_prefix_space"] = add_prefix_space - changes_to_apply = True - - if state.get("trim_offsets", trim_offsets) != trim_offsets: - state["trim_offsets"] = trim_offsets - changes_to_apply = True - - if changes_to_apply: - component_class = getattr(processors, state.pop("type")) - new_value = component_class(**state) - setattr(self.backend_tokenizer, tokenizer_component, new_value) - - @property - def mask_token(self) -> str: - """ - `str`: Mask token, to use when training a model with masked-language modeling. Log an error if used while not - having been set. - - BART tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily - comprise the space before the **. - """ - if self._mask_token is None: - if self.verbose: - logger.error("Using mask_token, but it is not set yet.") - return None - return str(self._mask_token) - - @mask_token.setter - def mask_token(self, value): - """ - Overriding the default behavior of the mask token to have it eat the space before it. - - This is needed to preserve backward compatibility with all the previously used models based on Bart. - """ - # Mask token behave like a normal word, i.e. 
include the space before it - # So we set lstrip to True - value = AddedToken(value, lstrip=True, rstrip=False) if isinstance(value, str) else value - self._mask_token = value - - def _batch_encode_plus(self, *args, **kwargs) -> BatchEncoding: - is_split_into_words = kwargs.get("is_split_into_words", False) - - if is_split_into_words and not self.add_prefix_space: - raise ValueError( - f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True " - "to use it with pretokenized inputs." - ) - - return super()._batch_encode_plus(*args, **kwargs) - - def _encode_plus(self, *args, **kwargs) -> BatchEncoding: - is_split_into_words = kwargs.get("is_split_into_words", False) - - if is_split_into_words and not self.add_prefix_space: - raise ValueError( - f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True " - "to use it with pretokenized inputs." - ) - - return super()._encode_plus(*args, **kwargs) - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - files = self._tokenizer.model.save(save_directory, name=filename_prefix) - return tuple(files) - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - output = [self.bos_token_id] + token_ids_0 + [self.eos_token_id] - if token_ids_1 is None: - return output - - return output + [self.eos_token_id] + token_ids_1 + [self.eos_token_id] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not - make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of zeros. - """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/train_ms.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/train_ms.py deleted file mode 100644 index 3ac82503eabfba5ba7600945cbd69a68f6ca3f2e..0000000000000000000000000000000000000000 --- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/train_ms.py +++ /dev/null @@ -1,594 +0,0 @@ -# flake8: noqa: E402 - -import os -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler, -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import generator_loss, discriminator_loss, feature_loss, kl_loss -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = ( - True # If encontered training problem,please try to disable TF32. 
-) -torch.set_float32_matmul_precision("medium") -torch.backends.cudnn.benchmark = True -torch.backends.cuda.sdp_kernel("flash") -torch.backends.cuda.enable_flash_sdp(True) -torch.backends.cuda.enable_mem_efficient_sdp( - True -) # Not available if torch version is lower than 2.0 -torch.backends.cuda.enable_math_sdp(True) -global_step = 0 - - -def run(): - dist.init_process_group( - backend="gloo", - init_method="env://", # Due to some training problem,we proposed to use gloo instead of nccl. - ) # Use torchrun instead of mp.spawn - rank = dist.get_rank() - n_gpus = dist.get_world_size() - hps = utils.get_hpar2ams() - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader( - train_dataset, - num_workers=16, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=4, - ) # DataLoader config could be adjusted. - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader( - eval_dataset, - num_workers=0, - shuffle=False, - batch_size=1, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn, - ) - if ( - "use_noise_scaled_mas" in hps.model.keys() - and hps.model.use_noise_scaled_mas is True - ): - print("Using noise scaled MAS for VITS2") - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if ( - "use_duration_discriminator" in hps.model.keys() - and hps.model.use_duration_discriminator is True - ): - print("Using duration discriminator for VITS2") - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if ( - "use_spk_conditioned_encoder" in hps.model.keys() - and hps.model.use_spk_conditioned_encoder is True - ): - if hps.data.n_speakers == 0: - raise ValueError( - "n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model" - ) - else: - print("Using normal encoder for VITS1") - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial=mas_noise_scale_initial, - noise_scale_delta=noise_scale_delta, - **hps.model, - ).cuda(rank) - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - 
eps=hps.train.eps, - ) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - try: - if net_dur_disc is not None: - _, _, dur_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), - net_dur_disc, - optim_dur_disc, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_g, g_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), - net_g, - optim_g, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_d, d_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), - net_d, - optim_d, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - if not optim_g.param_groups[0].get("initial_lr"): - optim_g.param_groups[0]["initial_lr"] = g_resume_lr - if not optim_d.param_groups[0].get("initial_lr"): - optim_d.param_groups[0]["initial_lr"] = d_resume_lr - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - if net_dur_disc is not None: - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR( - optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, eval_loader], - logger, - [writer, writer_eval], - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, None], - None, - None, - ) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers -): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - 
current_mas_noise_scale = ( - net_g.module.mas_noise_scale_initial - - net_g.module.noise_scale_delta * global_step - ) - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - ja_bert = ja_bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - ( - y_hat, - l_length, - attn, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (hidden_x, logw, logw_), - ) = net_g( - x, - x_lengths, - spec, - spec_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - - y = commons.slice_segments( - y, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc( - hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach() - ) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - ( - loss_dur_disc, - losses_dur_disc_r, - losses_dur_disc_g, - ) = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: 
- if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl, - } - ) - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - "all/attn": utils.plot_alignment_to_numpy( - attn[0, 0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - if net_dur_disc is not None: - utils.save_checkpoint( - net_dur_disc, - optim_dur_disc, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)), - ) - keep_ckpts = getattr(hps.train, "keep_ckpts", 5) - if keep_ckpts > 0: - utils.clean_checkpoints( - path_to_models=hps.model_dir, - n_ckpts_to_keep=keep_ckpts, - sort_by_time=True, - ) - - global_step += 1 - - if rank == 0: - logger.info("====> Epoch: {}".format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - ja_bert = ja_bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer( - x, - x_lengths, - speakers, - tone, - language, - bert, - ja_bert, - y=spec, - max_len=1000, - sdp_ratio=0.0 if not use_sdp else 1.0, - ) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - image_dict.update( - { - 
f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].cpu().numpy() - ) - } - ) - audio_dict.update( - { - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[ - 0, :, : y_hat_lengths[0] - ] - } - ) - image_dict.update( - { - f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - mel[0].cpu().numpy() - ) - } - ) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, : y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate, - ) - generator.train() - - -if __name__ == "__main__": - run() diff --git a/spaces/chinhon/frequent_word_counter/README.md b/spaces/chinhon/frequent_word_counter/README.md deleted file mode 100644 index 4134bd1d130fa2f44c132b7a1f8e07bd00c5a9bc..0000000000000000000000000000000000000000 --- a/spaces/chinhon/frequent_word_counter/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Frequent_word_counter -emoji: 📚 -colorFrom: yellow -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a2b4a4fc.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a2b4a4fc.js deleted file mode 100644 index bd79c0ad28fd92051e1200ed3683332dae1e134d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a2b4a4fc.js +++ /dev/null @@ -1,4 +0,0 @@ -const VERSION_RE = new RegExp("3.36.1/", "g");function import_fix(mod, base) {const url = new URL(mod, base); return import(`https://gradio.s3-us-west-2.amazonaws.com/3.36.1/${url.pathname?.startsWith('/') ? 
url.pathname.substring(1).replace(VERSION_RE, "") : url.pathname.replace(VERSION_RE, "")}`);}import{S as ve,e as Ae,s as ye,J as Ve,K as b,p as E,M as U,n as x,A as V,N as H,O as G,U as S,Q as B,X as se,af as Ee,a1 as ge,P as K,R as Z,G as Pe,m as he,V as _l,z as P,u as ue,v as R,y as re,B as Te,ag as Ul,k as D,o as C,x as j,ah as zl,h as Be,ai as Il,_ as Ne,F as O,T as Re,aj as dl,j as ml,t as cl,a9 as Nl,ab as Ol,ac as Dl,ad as Cl,E as jl,ae as Kl,q as Ql,r as Yl}from"./index-f877dfd5.js";import"./Blocks-adc2d4ca.js";import{U as ql}from"./UploadText-8aae32a4.js";import{a as bl,B as Xl}from"./Button-11a87b79.js";import{U as Gl}from"./Upload-3aa22eef.js";import{M as Zl}from"./ModifyUpload-87f877d6.js";import{B as gl}from"./BlockLabel-7929e88d.js";import{E as Jl}from"./Empty-2159e5e9.js";import{S as Wl,u as xl}from"./ShareButton-cdd94184.js";import{n as $l}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import"./IconButton-34da90d2.js";function en(l){let e,i,n,a;return{c(){e=Ve("svg"),i=Ve("path"),n=Ve("circle"),a=Ve("circle"),b(i,"d","M9 18V5l12-2v13"),b(n,"cx","6"),b(n,"cy","18"),b(n,"r","3"),b(a,"cx","18"),b(a,"cy","16"),b(a,"r","3"),b(e,"xmlns","http://www.w3.org/2000/svg"),b(e,"width","100%"),b(e,"height","100%"),b(e,"viewBox","0 0 24 24"),b(e,"fill","none"),b(e,"stroke","currentColor"),b(e,"stroke-width","1.5"),b(e,"stroke-linecap","round"),b(e,"stroke-linejoin","round"),b(e,"class","feather feather-music")},m(t,f){E(t,e,f),U(e,i),U(e,n),U(e,a)},p:x,i:x,o:x,d(t){t&&V(e)}}}class Ue extends ve{constructor(e){super(),Ae(this,e,null,en,ye,{})}}function Oe(l,e,i){const n=l.slice();return n[27]=e[i],n[29]=i,n}function De(l){let e,i,n,a,t=(l[6]==="label"||l[7]==="label")&&Ce(l);return{c(){e=H("span"),t&&t.c(),b(e,"class","pip first"),b(e,"style",i=l[14]+": 0%;"),S(e,"selected",l[17](l[0])),S(e,"in-range",l[16](l[0]))},m(f,u){E(f,e,u),t&&t.m(e,null),n||(a=[B(e,"click",function(){se(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}),B(e,"touchend",Ee(function(){se(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}))],n=!0)},p(f,u){l=f,l[6]==="label"||l[7]==="label"?t?t.p(l,u):(t=Ce(l),t.c(),t.m(e,null)):t&&(t.d(1),t=null),u&16384&&i!==(i=l[14]+": 0%;")&&b(e,"style",i),u&131073&&S(e,"selected",l[17](l[0])),u&65537&&S(e,"in-range",l[16](l[0]))},d(f){f&&V(e),t&&t.d(),n=!1,ge(a)}}}function Ce(l){let e,i=l[12](l[0],0,0)+"",n,a=l[10]&&je(l),t=l[11]&&Ke(l);return{c(){e=H("span"),a&&a.c(),n=K(i),t&&t.c(),b(e,"class","pipVal")},m(f,u){E(f,e,u),a&&a.m(e,null),U(e,n),t&&t.m(e,null)},p(f,u){f[10]?a?a.p(f,u):(a=je(f),a.c(),a.m(e,n)):a&&(a.d(1),a=null),u&4097&&i!==(i=f[12](f[0],0,0)+"")&&Z(n,i),f[11]?t?t.p(f,u):(t=Ke(f),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},d(f){f&&V(e),a&&a.d(),t&&t.d()}}}function je(l){let e,i;return{c(){e=H("span"),i=K(l[10]),b(e,"class","pipVal-prefix")},m(n,a){E(n,e,a),U(e,i)},p(n,a){a&1024&&Z(i,n[10])},d(n){n&&V(e)}}}function Ke(l){let e,i;return{c(){e=H("span"),i=K(l[11]),b(e,"class","pipVal-suffix")},m(n,a){E(n,e,a),U(e,i)},p(n,a){a&2048&&Z(i,n[11])},d(n){n&&V(e)}}}function Qe(l){let e,i=Pe(Array(l[19]+1)),n=[];for(let a=0;ap}=e,{focus:I=void 0}=e,{orientationStart:q=void 0}=e,{percentOf:le=void 0}=e,{moveHandle:ee=void 0}=e;function te(p){ee(void 0,p)}return l.$$set=p=>{"range"in p&&i(21,s=p.range),"min"in p&&i(0,h=p.min),"max"in p&&i(1,d=p.max),"step"in p&&i(22,c=p.step),"values"in p&&i(23,o=p.values),"vertical"in p&&i(2,_=p.vertical),"reversed"in p&&i(3,m=p.reversed),"hoverable"in p&&i(4,A=p.hoverable),"disabled"in p&&i(5,y=p.disabled),"pipstep"in 
p&&i(24,k=p.pipstep),"all"in p&&i(6,N=p.all),"first"in p&&i(7,Q=p.first),"last"in p&&i(8,F=p.last),"rest"in p&&i(9,Y=p.rest),"prefix"in p&&i(10,L=p.prefix),"suffix"in p&&i(11,J=p.suffix),"formatter"in p&&i(12,$=p.formatter),"focus"in p&&i(13,I=p.focus),"orientationStart"in p&&i(14,q=p.orientationStart),"percentOf"in p&&i(15,le=p.percentOf),"moveHandle"in p&&i(25,ee=p.moveHandle)},l.$$.update=()=>{l.$$.dirty&20971527&&i(26,n=k||((d-h)/c>=(_?50:100)?(d-h)/(_?10:20):1)),l.$$.dirty&71303171&&i(19,a=parseInt((d-h)/(c*n),10)),l.$$.dirty&71303169&&i(18,t=function(p){return h+p*c*n}),l.$$.dirty&8388608&&i(17,f=function(p){return o.some(ne=>ne===p)}),l.$$.dirty&10485760&&i(16,u=function(p){if(s==="min")return o[0]>p;if(s==="max")return o[0]p})},[h,d,_,m,A,y,N,Q,F,Y,L,J,$,I,q,le,u,f,t,a,te,s,c,o,k,ee,n]}class an extends ve{constructor(e){super(),Ae(this,e,nn,ln,ye,{range:21,min:0,max:1,step:22,values:23,vertical:2,reversed:3,hoverable:4,disabled:5,pipstep:24,all:6,first:7,last:8,rest:9,prefix:10,suffix:11,formatter:12,focus:13,orientationStart:14,percentOf:15,moveHandle:25})}}function el(l,e,i){const n=l.slice();return n[63]=e[i],n[65]=i,n}function ll(l){let e,i=l[21](l[63],l[65],l[23](l[63]))+"",n,a=l[18]&&nl(l),t=l[19]&&il(l);return{c(){e=H("span"),a&&a.c(),n=K(i),t&&t.c(),b(e,"class","rangeFloat")},m(f,u){E(f,e,u),a&&a.m(e,null),U(e,n),t&&t.m(e,null)},p(f,u){f[18]?a?a.p(f,u):(a=nl(f),a.c(),a.m(e,n)):a&&(a.d(1),a=null),u[0]&10485761&&i!==(i=f[21](f[63],f[65],f[23](f[63]))+"")&&Z(n,i),f[19]?t?t.p(f,u):(t=il(f),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},d(f){f&&V(e),a&&a.d(),t&&t.d()}}}function nl(l){let e,i;return{c(){e=H("span"),i=K(l[18]),b(e,"class","rangeFloat-prefix")},m(n,a){E(n,e,a),U(e,i)},p(n,a){a[0]&262144&&Z(i,n[18])},d(n){n&&V(e)}}}function il(l){let e,i;return{c(){e=H("span"),i=K(l[19]),b(e,"class","rangeFloat-suffix")},m(n,a){E(n,e,a),U(e,i)},p(n,a){a[0]&524288&&Z(i,n[19])},d(n){n&&V(e)}}}function al(l){let e,i,n,a,t,f,u,s,h,d,c,o,_=l[7]&&ll(l);return{c(){e=H("span"),i=H("span"),n=G(),_&&_.c(),b(i,"class","rangeNub"),b(e,"role","slider"),b(e,"class","rangeHandle"),b(e,"data-handle",l[65]),b(e,"style",a=l[28]+": "+l[29][l[65]]+"%; z-index: "+(l[26]===l[65]?3:2)+";"),b(e,"aria-valuemin",t=l[2]===!0&&l[65]===1?l[0][0]:l[3]),b(e,"aria-valuemax",f=l[2]===!0&&l[65]===0?l[0][1]:l[4]),b(e,"aria-valuenow",u=l[63]),b(e,"aria-valuetext",s=""+(l[18]+l[21](l[63],l[65],l[23](l[63]))+l[19])),b(e,"aria-orientation",h=l[6]?"vertical":"horizontal"),b(e,"aria-disabled",l[10]),b(e,"disabled",l[10]),b(e,"tabindex",d=l[10]?-1:0),S(e,"active",l[24]&&l[26]===l[65]),S(e,"press",l[25]&&l[26]===l[65])},m(m,A){E(m,e,A),U(e,i),U(e,n),_&&_.m(e,null),c||(o=[B(e,"blur",l[33]),B(e,"focus",l[34]),B(e,"keydown",l[35])],c=!0)},p(m,A){m[7]?_?_.p(m,A):(_=ll(m),_.c(),_.m(e,null)):_&&(_.d(1),_=null),A[0]&872415232&&a!==(a=m[28]+": "+m[29][m[65]]+"%; z-index: "+(m[26]===m[65]?3:2)+";")&&b(e,"style",a),A[0]&13&&t!==(t=m[2]===!0&&m[65]===1?m[0][0]:m[3])&&b(e,"aria-valuemin",t),A[0]&21&&f!==(f=m[2]===!0&&m[65]===0?m[0][1]:m[4])&&b(e,"aria-valuemax",f),A[0]&1&&u!==(u=m[63])&&b(e,"aria-valuenow",u),A[0]&11272193&&s!==(s=""+(m[18]+m[21](m[63],m[65],m[23](m[63]))+m[19]))&&b(e,"aria-valuetext",s),A[0]&64&&h!==(h=m[6]?"vertical":"horizontal")&&b(e,"aria-orientation",h),A[0]&1024&&b(e,"aria-disabled",m[10]),A[0]&1024&&b(e,"disabled",m[10]),A[0]&1024&&d!==(d=m[10]?-1:0)&&b(e,"tabindex",d),A[0]&83886080&&S(e,"active",m[24]&&m[26]===m[65]),A[0]&100663296&&S(e,"press",m[25]&&m[26]===m[65])},d(m){m&&V(e),_&&_.d(),c=!1,ge(o)}}}function 
tl(l){let e,i;return{c(){e=H("span"),b(e,"class","rangeBar"),b(e,"style",i=l[28]+": "+l[31](l[29])+"%; "+l[27]+": "+l[32](l[29])+"%;")},m(n,a){E(n,e,a)},p(n,a){a[0]&939524096&&i!==(i=n[28]+": "+n[31](n[29])+"%; "+n[27]+": "+n[32](n[29])+"%;")&&b(e,"style",i)},d(n){n&&V(e)}}}function fl(l){let e,i;return e=new an({props:{values:l[0],min:l[3],max:l[4],step:l[5],range:l[2],vertical:l[6],reversed:l[8],orientationStart:l[28],hoverable:l[9],disabled:l[10],all:l[13],first:l[14],last:l[15],rest:l[16],pipstep:l[12],prefix:l[18],suffix:l[19],formatter:l[20],focus:l[24],percentOf:l[23],moveHandle:l[30]}}),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&1&&(t.values=n[0]),a[0]&8&&(t.min=n[3]),a[0]&16&&(t.max=n[4]),a[0]&32&&(t.step=n[5]),a[0]&4&&(t.range=n[2]),a[0]&64&&(t.vertical=n[6]),a[0]&256&&(t.reversed=n[8]),a[0]&268435456&&(t.orientationStart=n[28]),a[0]&512&&(t.hoverable=n[9]),a[0]&1024&&(t.disabled=n[10]),a[0]&8192&&(t.all=n[13]),a[0]&16384&&(t.first=n[14]),a[0]&32768&&(t.last=n[15]),a[0]&65536&&(t.rest=n[16]),a[0]&4096&&(t.pipstep=n[12]),a[0]&262144&&(t.prefix=n[18]),a[0]&524288&&(t.suffix=n[19]),a[0]&1048576&&(t.formatter=n[20]),a[0]&16777216&&(t.focus=n[24]),a[0]&8388608&&(t.percentOf=n[23]),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function tn(l){let e,i,n,a,t,f,u=Pe(l[0]),s=[];for(let c=0;c{d=null}),re()),(!a||o[0]&131072)&&b(e,"id",c[17]),(!a||o[0]&4)&&S(e,"range",c[2]),(!a||o[0]&1024)&&S(e,"disabled",c[10]),(!a||o[0]&512)&&S(e,"hoverable",c[9]),(!a||o[0]&64)&&S(e,"vertical",c[6]),(!a||o[0]&256)&&S(e,"reversed",c[8]),(!a||o[0]&16777216)&&S(e,"focus",c[24]),(!a||o[0]&4)&&S(e,"min",c[2]==="min"),(!a||o[0]&4)&&S(e,"max",c[2]==="max"),(!a||o[0]&2048)&&S(e,"pips",c[11]),(!a||o[0]&122880)&&S(e,"pip-labels",c[13]==="label"||c[14]==="label"||c[15]==="label"||c[16]==="label")},i(c){a||(P(d),a=!0)},o(c){R(d),a=!1},d(c){c&&V(e),_l(s,c),h&&h.d(),d&&d.d(),l[49](null),t=!1,ge(f)}}}function sl(l){if(!l)return-1;for(var e=0;l=l.previousElementSibling;)e++;return e}function Le(l){return l.type.includes("touch")?l.touches[0]:l}function fn(l,e,i){let n,a,t,f,u,s,h=x,d=()=>(h(),h=zl(be,r=>i(29,s=r)),be);l.$$.on_destroy.push(()=>h());let{slider:c}=e,{range:o=!1}=e,{pushy:_=!1}=e,{min:m=0}=e,{max:A=100}=e,{step:y=1}=e,{values:k=[(A+m)/2]}=e,{vertical:N=!1}=e,{float:Q=!1}=e,{reversed:F=!1}=e,{hoverable:Y=!0}=e,{disabled:L=!1}=e,{pips:J=!1}=e,{pipstep:$=void 0}=e,{all:I=void 0}=e,{first:q=void 0}=e,{last:le=void 0}=e,{rest:ee=void 0}=e,{id:te=void 0}=e,{prefix:p=""}=e,{suffix:ne=""}=e,{formatter:oe=(r,v,M)=>r}=e,{handleFormatter:pe=oe}=e,{precision:X=2}=e,{springValues:_e={stiffness:.15,damping:.4}}=e;const de=Te();let me=0,W=!1,g=!1,fe=!1,w=!1,z=k.length-1,ie,ce,be;function He(r){const v=c.querySelectorAll(".handle"),M=Array.prototype.includes.call(v,r),T=Array.prototype.some.call(v,ae=>ae.contains(r));return M||T}function Se(r){return o==="min"||o==="max"?r.slice(0,1):o?r.slice(0,2):r}function we(){return c.getBoundingClientRect()}function Me(r){const v=we();let M=0,T=0,ae=0;N?(M=r.clientY-v.top,T=M/v.height*100,T=F?T:100-T):(M=r.clientX-v.left,T=M/v.width*100,T=F?100-T:T),ae=(A-m)/100*T+m;let Ie;return o===!0&&k[0]===k[1]?ae>k[1]?1:0:(Ie=k.indexOf([...k].sort((Fl,Ll)=>Math.abs(ae-Fl)-Math.abs(ae-Ll))[0]),Ie)}function Fe(r){const v=we();let M=0,T=0,ae=0;N?(M=r.clientY-v.top,T=M/v.height*100,T=F?T:100-T):(M=r.clientX-v.left,T=M/v.width*100,T=F?100-T:T),ae=(A-m)/100*T+m,ke(z,ae)}function ke(r,v){return v=t(v),typeof 
r>"u"&&(r=z),o&&(r===0&&v>k[1]?_?i(0,k[1]=v,k):v=k[1]:r===1&&vt(r))})}function ze(){!L&&de("stop",{activeHandle:z,startValue:ie,value:k[z],values:k.map(r=>t(r))})}function Hl(){!L&&de("change",{activeHandle:z,startValue:ie,previousValue:typeof ce>"u"?ie:ce,value:k[z],values:k.map(r=>t(r))})}function Ml(r){Be[r?"unshift":"push"](()=>{c=r,i(1,c)})}return l.$$set=r=>{"slider"in r&&i(1,c=r.slider),"range"in r&&i(2,o=r.range),"pushy"in r&&i(43,_=r.pushy),"min"in r&&i(3,m=r.min),"max"in r&&i(4,A=r.max),"step"in r&&i(5,y=r.step),"values"in r&&i(0,k=r.values),"vertical"in r&&i(6,N=r.vertical),"float"in r&&i(7,Q=r.float),"reversed"in r&&i(8,F=r.reversed),"hoverable"in r&&i(9,Y=r.hoverable),"disabled"in r&&i(10,L=r.disabled),"pips"in r&&i(11,J=r.pips),"pipstep"in r&&i(12,$=r.pipstep),"all"in r&&i(13,I=r.all),"first"in r&&i(14,q=r.first),"last"in r&&i(15,le=r.last),"rest"in r&&i(16,ee=r.rest),"id"in r&&i(17,te=r.id),"prefix"in r&&i(18,p=r.prefix),"suffix"in r&&i(19,ne=r.suffix),"formatter"in r&&i(20,oe=r.formatter),"handleFormatter"in r&&i(21,pe=r.handleFormatter),"precision"in r&&i(44,X=r.precision),"springValues"in r&&i(45,_e=r.springValues)},l.$$.update=()=>{l.$$.dirty[0]&24&&i(48,a=function(r){return r<=m?m:r>=A?A:r}),l.$$.dirty[0]&56|l.$$.dirty[1]&139264&&i(47,t=function(r){if(r<=m)return m;if(r>=A)return A;let v=(r-m)%y,M=r-v;return Math.abs(v)*2>=y&&(M+=v>0?y:-y),M=a(M),parseFloat(M.toFixed(X))}),l.$$.dirty[0]&24|l.$$.dirty[1]&8192&&i(23,n=function(r){let v=(r-m)/(A-m)*100;return isNaN(v)||v<=0?0:v>=100?100:parseFloat(v.toFixed(X))}),l.$$.dirty[0]&12582937|l.$$.dirty[1]&114688&&(Array.isArray(k)||(i(0,k=[(A+m)/2]),console.error("'values' prop should be an Array (https://github.com/simeydotme/svelte-range-slider-pips#slider-props)")),i(0,k=Se(k.map(r=>t(r)))),me!==k.length?d(i(22,be=Ul(k.map(r=>n(r)),_e))):be.set(k.map(r=>n(r))),i(46,me=k.length)),l.$$.dirty[0]&320&&i(28,f=N?F?"top":"bottom":F?"right":"left"),l.$$.dirty[0]&320&&i(27,u=N?F?"bottom":"top":F?"left":"right")},[k,c,o,m,A,y,N,Q,F,Y,L,J,$,I,q,le,ee,te,p,ne,oe,pe,be,n,W,fe,z,u,f,s,ke,pl,wl,kl,vl,Al,yl,Sl,El,Vl,Pl,Rl,Tl,_,X,_e,me,t,a,Ml]}class sn extends ve{constructor(e){super(),Ae(this,e,fn,tn,ye,{slider:1,range:2,pushy:43,min:3,max:4,step:5,values:0,vertical:6,float:7,reversed:8,hoverable:9,disabled:10,pips:11,pipstep:12,all:13,first:14,last:15,rest:16,id:17,prefix:18,suffix:19,formatter:20,handleFormatter:21,precision:44,springValues:45},null,[-1,-1,-1])}}function hl(l,{crop_values:e,autoplay:i}={}){function n(){if(e===void 0)return;const t=e[0]/100*l.duration,f=e[1]/100*l.duration;l.currentTimef&&(l.currentTime=t,l.pause())}async function a(){i&&(l.pause(),await l.play())}return l.addEventListener("loadeddata",a),l.addEventListener("timeupdate",n),{destroy(){l.removeEventListener("loadeddata",a),l.removeEventListener("timeupdate",n)}}}function un(l){let e,i,n,a,t,f,u,s,h,d,c;e=new Zl({props:{editable:!0,absolute:!0}}),e.$on("clear",l[13]),e.$on("edit",l[26]);let 
o=l[8]==="edit"&&l[9]?.duration&&ul(l);return{c(){D(e.$$.fragment),i=G(),n=H("audio"),u=G(),o&&o.c(),s=he(),n.controls=!0,b(n,"preload","metadata"),Re(n.src,a=l[1]?.data)||b(n,"src",a),b(n,"data-testid",t=`${l[2]}-audio`),b(n,"class","svelte-1thnwz")},m(_,m){C(e,_,m),E(_,i,m),E(_,n,m),l[27](n),E(_,u,m),o&&o.m(_,m),E(_,s,m),h=!0,d||(c=[dl(f=hl.call(null,n,{autoplay:l[6],crop_values:l[10]})),B(n,"play",l[23]),B(n,"pause",l[24]),B(n,"ended",l[16])],d=!0)},p(_,m){(!h||m[0]&2&&!Re(n.src,a=_[1]?.data))&&b(n,"src",a),(!h||m[0]&4&&t!==(t=`${_[2]}-audio`))&&b(n,"data-testid",t),f&&se(f.update)&&m[0]&1088&&f.update.call(null,{autoplay:_[6],crop_values:_[10]}),_[8]==="edit"&&_[9]?.duration?o?(o.p(_,m),m[0]&768&&P(o,1)):(o=ul(_),o.c(),P(o,1),o.m(s.parentNode,s)):o&&(ue(),R(o,1,1,()=>{o=null}),re())},i(_){h||(P(e.$$.fragment,_),P(o),h=!0)},o(_){R(e.$$.fragment,_),R(o),h=!1},d(_){_&&(V(i),V(n),V(u),V(s)),j(e,_),l[27](null),o&&o.d(_),d=!1,ge(c)}}}function rn(l){let e,i,n,a;const t=[_n,on],f=[];function u(s,h){return s[4]==="microphone"?0:s[4]==="upload"?1:-1}return~(e=u(l))&&(i=f[e]=t[e](l)),{c(){i&&i.c(),n=he()},m(s,h){~e&&f[e].m(s,h),E(s,n,h),a=!0},p(s,h){let d=e;e=u(s),e===d?~e&&f[e].p(s,h):(i&&(ue(),R(f[d],1,1,()=>{f[d]=null}),re()),~e?(i=f[e],i?i.p(s,h):(i=f[e]=t[e](s),i.c()),P(i,1),i.m(n.parentNode,n)):i=null)},i(s){a||(P(i),a=!0)},o(s){R(i),a=!1},d(s){s&&V(n),~e&&f[e].d(s)}}}function ul(l){let e,i,n;function a(f){l[28](f)}let t={range:!0,min:0,max:100,step:1};return l[10]!==void 0&&(t.values=l[10]),e=new sn({props:t}),Be.push(()=>ml(e,"values",a)),e.$on("change",l[14]),{c(){D(e.$$.fragment)},m(f,u){C(e,f,u),n=!0},p(f,u){const s={};!i&&u[0]&1024&&(i=!0,s.values=f[10],cl(()=>i=!1)),e.$set(s)},i(f){n||(P(e.$$.fragment,f),n=!0)},o(f){R(e.$$.fragment,f),n=!1},d(f){j(e,f)}}}function on(l){let e,i,n;function a(f){l[25](f)}let t={filetype:"audio/aac,audio/midi,audio/mpeg,audio/ogg,audio/wav,audio/x-wav,audio/opus,audio/webm,audio/flac,audio/vnd.rn-realaudio,audio/x-ms-wma,audio/x-aiff,audio/amr,audio/*",$$slots:{default:[dn]},$$scope:{ctx:l}};return l[0]!==void 0&&(t.dragging=l[0]),e=new Gl({props:t}),Be.push(()=>ml(e,"dragging",a)),e.$on("load",l[15]),{c(){D(e.$$.fragment)},m(f,u){C(e,f,u),n=!0},p(f,u){const s={};u[0]&536870912&&(s.$$scope={dirty:u,ctx:f}),!i&&u[0]&1&&(i=!0,s.dragging=f[0],cl(()=>i=!1)),e.$set(s)},i(f){n||(P(e.$$.fragment,f),n=!0)},o(f){R(e.$$.fragment,f),n=!1},d(f){j(e,f)}}}function _n(l){let e,i,n,a;const t=[cn,mn],f=[];function u(s,h){return s[7]?0:1}return i=u(l),n=f[i]=t[i](l),{c(){e=H("div"),n.c(),b(e,"class","mic-wrap svelte-1thnwz")},m(s,h){E(s,e,h),f[i].m(e,null),a=!0},p(s,h){let d=i;i=u(s),i===d?f[i].p(s,h):(ue(),R(f[d],1,1,()=>{f[d]=null}),re(),n=f[i],n?n.p(s,h):(n=f[i]=t[i](s),n.c()),P(n,1),n.m(e,null))},i(s){a||(P(n),a=!0)},o(s){R(n),a=!1},d(s){s&&V(e),f[i].d()}}}function dn(l){let e;const i=l[22].default,n=Nl(i,l,l[29],null);return{c(){n&&n.c()},m(a,t){n&&n.m(a,t),e=!0},p(a,t){n&&n.p&&(!e||t[0]&536870912)&&Ol(n,i,a,a[29],e?Cl(i,a[29],t,null):Dl(a[29]),null)},i(a){e||(P(n,a),e=!0)},o(a){R(n,a),e=!1},d(a){n&&n.d(a)}}}function mn(l){let e,i;return e=new bl({props:{size:"sm",$$slots:{default:[bn]},$$scope:{ctx:l}}}),e.$on("click",l[11]),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&536870912&&(t.$$scope={dirty:a,ctx:n}),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function cn(l){let e,i;return e=new 
bl({props:{size:"sm",$$slots:{default:[gn]},$$scope:{ctx:l}}}),e.$on("click",l[12]),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&536870912&&(t.$$scope={dirty:a,ctx:n}),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function bn(l){let e,i;return{c(){e=H("span"),e.innerHTML='',i=K(` - Record from microphone`),b(e,"class","record-icon svelte-1thnwz")},m(n,a){E(n,e,a),E(n,i,a)},p:x,d(n){n&&(V(e),V(i))}}}function gn(l){let e,i;return{c(){e=H("span"),e.innerHTML=' ',i=K(` - Stop recording`),b(e,"class","record-icon svelte-1thnwz")},m(n,a){E(n,e,a),E(n,i,a)},p:x,d(n){n&&(V(e),V(i))}}}function hn(l){let e,i,n,a,t,f;e=new gl({props:{show_label:l[3],Icon:Ue,float:l[4]==="upload"&&l[1]===null,label:l[2]||"Audio"}});const u=[rn,un],s=[];function h(d,c){return d[1]===null||d[5]?0:1}return n=h(l),a=s[n]=u[n](l),{c(){D(e.$$.fragment),i=G(),a.c(),t=he()},m(d,c){C(e,d,c),E(d,i,c),s[n].m(d,c),E(d,t,c),f=!0},p(d,c){const o={};c[0]&8&&(o.show_label=d[3]),c[0]&18&&(o.float=d[4]==="upload"&&d[1]===null),c[0]&4&&(o.label=d[2]||"Audio"),e.$set(o);let _=n;n=h(d),n===_?s[n].p(d,c):(ue(),R(s[_],1,1,()=>{s[_]=null}),re(),a=s[n],a?a.p(d,c):(a=s[n]=u[n](d),a.c()),P(a,1),a.m(t.parentNode,t))},i(d){f||(P(e.$$.fragment,d),P(a),f=!0)},o(d){R(e.$$.fragment,d),R(a),f=!1},d(d){d&&(V(i),V(t)),j(e,d),s[n].d(d)}}}const pn=500,rl=44;function wn(l){return new Promise((e,i)=>{let n=new FileReader;n.onerror=i,n.onload=()=>e(n.result),n.readAsDataURL(l)})}function kn(l,e,i){let{$$slots:n={},$$scope:a}=e,{value:t=null}=e,{label:f}=e,{show_label:u=!0}=e,{name:s=""}=e,{source:h}=e,{pending:d=!1}=e,{streaming:c=!1}=e,{autoplay:o=!1}=e,_=!1,m,A="",y,k=[],N=!1,Q,F=!1,Y=[0,100],L=[],J;function $(){J=[Ne(()=>import("./module-26a44bde.js"),["assets/module-26a44bde.js","assets/module-a3cf0cc4.js","assets/index-f877dfd5.js","assets/index-63038c0b.css"]),Ne(()=>import("./module-a5a0afa0.js"),["assets/module-a5a0afa0.js","assets/module-a3cf0cc4.js"])]}c&&$();const I=Te(),q=async(w,z)=>{let ie=new Blob(w,{type:"audio/wav"});i(1,t={data:await wn(ie),name:"audio.wav"}),I(z,t)};async function le(){let w;try{w=await navigator.mediaDevices.getUserMedia({audio:!0})}catch(z){if(z instanceof DOMException&&z.name=="NotAllowedError"){I("error","Please allow access to the microphone for recording.");return}throw z}if(w!=null){if(c){const[{MediaRecorder:z,register:ie},{connect:ce}]=await Promise.all(J);await ie(await ce()),m=new z(w,{mimeType:"audio/wav"});async function be(He){let Se=await He.data.arrayBuffer(),we=new Uint8Array(Se);if(y||(i(19,y=new Uint8Array(Se.slice(0,rl))),we=new Uint8Array(Se.slice(rl))),d)k.push(we);else{let Me=[y].concat(k,[we]);q(Me,"stream"),i(20,k=[])}}m.addEventListener("dataavailable",be)}else m=new MediaRecorder(w),m.addEventListener("dataavailable",z=>{L.push(z.data)}),m.addEventListener("stop",async()=>{i(7,_=!1),await q(L,"change"),await q(L,"stop_recording"),L=[]});F=!0}}async function ee(){i(7,_=!0),I("start_recording"),F||await le(),i(19,y=void 0),c?m.start(pn):m.start()}Il(()=>{m&&m.state!=="inactive"&&m.stop()});function te(){m.stop(),c&&(i(7,_=!1),d&&i(21,N=!0))}function p(){I("change",null),I("clear"),i(8,A=""),i(1,t=null)}function ne({detail:{values:w}}){t&&(I("change",{data:t.data,name:s,crop_min:w[0],crop_max:w[1]}),I("edit"))}function oe({detail:w}){i(1,t=w),I("change",{data:w.data,name:w.name}),I("upload",w)}function pe(){I("stop"),I("end")}let{dragging:X=!1}=e;function _e(w){O.call(this,l,w)}function de(w){O.call(this,l,w)}function 
me(w){X=w,i(0,X)}const W=()=>i(8,A="edit");function g(w){Be[w?"unshift":"push"](()=>{Q=w,i(9,Q)})}function fe(w){Y=w,i(10,Y)}return l.$$set=w=>{"value"in w&&i(1,t=w.value),"label"in w&&i(2,f=w.label),"show_label"in w&&i(3,u=w.show_label),"name"in w&&i(17,s=w.name),"source"in w&&i(4,h=w.source),"pending"in w&&i(18,d=w.pending),"streaming"in w&&i(5,c=w.streaming),"autoplay"in w&&i(6,o=w.autoplay),"dragging"in w&&i(0,X=w.dragging),"$$scope"in w&&i(29,a=w.$$scope)},l.$$.update=()=>{if(l.$$.dirty[0]&3932160&&N&&d===!1&&(i(21,N=!1),y&&k)){let w=[y].concat(k);i(20,k=[]),q(w,"stream")}l.$$.dirty[0]&1&&I("drag",X)},[X,t,f,u,h,c,o,_,A,Q,Y,ee,te,p,ne,oe,pe,s,d,y,k,N,n,_e,de,me,W,g,fe,a]}class vn extends ve{constructor(e){super(),Ae(this,e,kn,hn,ye,{value:1,label:2,show_label:3,name:17,source:4,pending:18,streaming:5,autoplay:6,dragging:0},null,[-1,-1])}}function ol(l){let e,i,n;return i=new Wl({props:{formatter:l[9],value:l[0]}}),i.$on("error",l[10]),i.$on("share",l[11]),{c(){e=H("div"),D(i.$$.fragment),b(e,"class","icon-button svelte-1yfus5a")},m(a,t){E(a,e,t),C(i,e,null),n=!0},p(a,t){const f={};t&1&&(f.value=a[0]),i.$set(f)},i(a){n||(P(i.$$.fragment,a),n=!0)},o(a){R(i.$$.fragment,a),n=!1},d(a){a&&V(e),j(i)}}}function An(l){let e,i,n,a,t,f;return{c(){e=H("audio"),e.controls=!0,b(e,"preload","metadata"),Re(e.src,i=l[0]?.data)||b(e,"src",i),b(e,"data-testid",n=`${l[1]}-audio`),b(e,"class","svelte-1yfus5a")},m(u,s){E(u,e,s),t||(f=[dl(a=hl.call(null,e,{autoplay:l[3]})),B(e,"play",l[7]),B(e,"pause",l[8]),B(e,"ended",l[5])],t=!0)},p(u,s){s&1&&!Re(e.src,i=u[0]?.data)&&b(e,"src",i),s&2&&n!==(n=`${u[1]}-audio`)&&b(e,"data-testid",n),a&&se(a.update)&&s&8&&a.update.call(null,{autoplay:u[3]})},i:x,o:x,d(u){u&&V(e),t=!1,ge(f)}}}function yn(l){let e,i;return e=new Jl({props:{size:"small",$$slots:{default:[Sn]},$$scope:{ctx:l}}}),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a&8192&&(t.$$scope={dirty:a,ctx:n}),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function Sn(l){let e,i;return e=new Ue({}),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function En(l){let e,i,n,a,t,f,u;e=new gl({props:{show_label:l[2],Icon:Ue,float:!1,label:l[1]||"Audio"}});let s=l[4]&&l[0]!==null&&ol(l);const h=[yn,An],d=[];function c(o,_){return o[0]===null?0:1}return a=c(l),t=d[a]=h[a](l),{c(){D(e.$$.fragment),i=G(),s&&s.c(),n=G(),t.c(),f=he()},m(o,_){C(e,o,_),E(o,i,_),s&&s.m(o,_),E(o,n,_),d[a].m(o,_),E(o,f,_),u=!0},p(o,[_]){const m={};_&4&&(m.show_label=o[2]),_&2&&(m.label=o[1]||"Audio"),e.$set(m),o[4]&&o[0]!==null?s?(s.p(o,_),_&17&&P(s,1)):(s=ol(o),s.c(),P(s,1),s.m(n.parentNode,n)):s&&(ue(),R(s,1,1,()=>{s=null}),re());let A=a;a=c(o),a===A?d[a].p(o,_):(ue(),R(d[A],1,1,()=>{d[A]=null}),re(),t=d[a],t?t.p(o,_):(t=d[a]=h[a](o),t.c()),P(t,1),t.m(f.parentNode,f))},i(o){u||(P(e.$$.fragment,o),P(s),P(t),u=!0)},o(o){R(e.$$.fragment,o),R(s),R(t),u=!1},d(o){o&&(V(i),V(n),V(f)),j(e,o),s&&s.d(o),d[a].d(o)}}}function Vn(l,e,i){let{value:n=null}=e,{label:a}=e,{name:t}=e,{show_label:f=!0}=e,{autoplay:u}=e,{show_share_button:s=!1}=e;const h=Te();function d(){h("stop"),h("end")}function c(y){O.call(this,l,y)}function o(y){O.call(this,l,y)}const _=async y=>y?``:"";function m(y){O.call(this,l,y)}function A(y){O.call(this,l,y)}return l.$$set=y=>{"value"in y&&i(0,n=y.value),"label"in y&&i(1,a=y.label),"name"in y&&i(6,t=y.name),"show_label"in y&&i(2,f=y.show_label),"autoplay"in y&&i(3,u=y.autoplay),"show_share_button"in 
y&&i(4,s=y.show_share_button)},l.$$.update=()=>{l.$$.dirty&65&&n&&h("change",{name:t,data:n?.data})},[n,a,f,u,s,d,t,c,o,_,m,A]}class Pn extends ve{constructor(e){super(),Ae(this,e,Vn,En,ye,{value:0,label:1,name:6,show_label:2,autoplay:3,show_share_button:4})}}function Rn(l){let e,i;return e=new Pn({props:{autoplay:l[15],show_label:l[9],show_share_button:l[16],value:l[17],name:l[17]?.name||"audio_file",label:l[8]}}),e.$on("share",l[34]),e.$on("error",l[35]),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&32768&&(t.autoplay=n[15]),a[0]&512&&(t.show_label=n[9]),a[0]&65536&&(t.show_share_button=n[16]),a[0]&131072&&(t.value=n[17]),a[0]&131072&&(t.name=n[17]?.name||"audio_file"),a[0]&256&&(t.label=n[8]),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function Tn(l){let e,i;return e=new vn({props:{label:l[8],show_label:l[9],value:l[17],name:l[6],source:l[7],pending:l[10],streaming:l[11],autoplay:l[15],$$slots:{default:[Bn]},$$scope:{ctx:l}}}),e.$on("change",l[22]),e.$on("stream",l[23]),e.$on("drag",l[24]),e.$on("edit",l[25]),e.$on("play",l[26]),e.$on("pause",l[27]),e.$on("stop",l[28]),e.$on("end",l[29]),e.$on("start_recording",l[30]),e.$on("stop_recording",l[31]),e.$on("upload",l[32]),e.$on("error",l[33]),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&256&&(t.label=n[8]),a[0]&512&&(t.show_label=n[9]),a[0]&131072&&(t.value=n[17]),a[0]&64&&(t.name=n[6]),a[0]&128&&(t.source=n[7]),a[0]&1024&&(t.pending=n[10]),a[0]&2048&&(t.streaming=n[11]),a[0]&32768&&(t.autoplay=n[15]),a[1]&32&&(t.$$scope={dirty:a,ctx:n}),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function Bn(l){let e,i;return e=new ql({props:{type:"audio"}}),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p:x,i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function Hn(l){let e,i,n,a,t,f;const u=[l[1]];let s={};for(let o=0;o{d[A]=null}),re(),a=d[n],a?a.p(o,_):(a=d[n]=h[n](o),a.c()),P(a,1),a.m(t.parentNode,t))},i(o){f||(P(e.$$.fragment,o),P(a),f=!0)},o(o){R(e.$$.fragment,o),R(a),f=!1},d(o){o&&(V(i),V(t)),j(e,o),d[n].d(o)}}}function Mn(l){let e,i;return e=new Xl({props:{variant:l[5]==="dynamic"&&l[0]===null&&l[7]==="upload"?"dashed":"solid",border_mode:l[18]?"focus":"base",padding:!1,elem_id:l[2],elem_classes:l[3],visible:l[4],container:l[12],scale:l[13],min_width:l[14],$$slots:{default:[Hn]},$$scope:{ctx:l}}}),{c(){D(e.$$.fragment)},m(n,a){C(e,n,a),i=!0},p(n,a){const t={};a[0]&161&&(t.variant=n[5]==="dynamic"&&n[0]===null&&n[7]==="upload"?"dashed":"solid"),a[0]&262144&&(t.border_mode=n[18]?"focus":"base"),a[0]&4&&(t.elem_id=n[2]),a[0]&8&&(t.elem_classes=n[3]),a[0]&16&&(t.visible=n[4]),a[0]&4096&&(t.container=n[12]),a[0]&8192&&(t.scale=n[13]),a[0]&16384&&(t.min_width=n[14]),a[0]&495587|a[1]&32&&(t.$$scope={dirty:a,ctx:n}),e.$set(t)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){j(e,n)}}}function Fn(l,e,i){const n=Te();let{elem_id:a=""}=e,{elem_classes:t=[]}=e,{visible:f=!0}=e,{mode:u}=e,{value:s=null}=e,{name:h}=e,{source:d}=e,{label:c}=e,{root:o}=e,{show_label:_}=e,{pending:m}=e,{streaming:A}=e,{root_url:y}=e,{container:k=!1}=e,{scale:N=null}=e,{min_width:Q=void 0}=e,{loading_status:F}=e,{autoplay:Y=!1}=e,{show_share_button:L=!1}=e,J,$;const I=({detail:g})=>{i(0,s=g),n("change",s)},q=({detail:g})=>{i(0,s=g),n("stream",s)},le=({detail:g})=>i(18,$=g);function ee(g){O.call(this,l,g)}function te(g){O.call(this,l,g)}function p(g){O.call(this,l,g)}function 
ne(g){O.call(this,l,g)}function oe(g){O.call(this,l,g)}function pe(g){O.call(this,l,g)}function X(g){O.call(this,l,g)}function _e(g){O.call(this,l,g)}const de=({detail:g})=>{i(1,F=F||{}),i(1,F.status="error",F),n("error",g)};function me(g){O.call(this,l,g)}function W(g){O.call(this,l,g)}return l.$$set=g=>{"elem_id"in g&&i(2,a=g.elem_id),"elem_classes"in g&&i(3,t=g.elem_classes),"visible"in g&&i(4,f=g.visible),"mode"in g&&i(5,u=g.mode),"value"in g&&i(0,s=g.value),"name"in g&&i(6,h=g.name),"source"in g&&i(7,d=g.source),"label"in g&&i(8,c=g.label),"root"in g&&i(20,o=g.root),"show_label"in g&&i(9,_=g.show_label),"pending"in g&&i(10,m=g.pending),"streaming"in g&&i(11,A=g.streaming),"root_url"in g&&i(21,y=g.root_url),"container"in g&&i(12,k=g.container),"scale"in g&&i(13,N=g.scale),"min_width"in g&&i(14,Q=g.min_width),"loading_status"in g&&i(1,F=g.loading_status),"autoplay"in g&&i(15,Y=g.autoplay),"show_share_button"in g&&i(16,L=g.show_share_button)},l.$$.update=()=>{l.$$.dirty[0]&3145729&&i(17,J=$l(s,o,y))},[s,F,a,t,f,u,h,d,c,_,m,A,k,N,Q,Y,L,J,$,n,o,y,I,q,le,ee,te,p,ne,oe,pe,X,_e,de,me,W]}class Ln extends ve{constructor(e){super(),Ae(this,e,Fn,Mn,ye,{elem_id:2,elem_classes:3,visible:4,mode:5,value:0,name:6,source:7,label:8,root:20,show_label:9,pending:10,streaming:11,root_url:21,container:12,scale:13,min_width:14,loading_status:1,autoplay:15,show_share_button:16},null,[-1,-1])}}const qn=Ln,Xn=["static","dynamic"],Gn=()=>({type:{input_payload:"{ name: string; data: string }",response_object:"{ name: string; data: string, is_file: boolean }"},description:{input_payload:"audio data as object with filename and base64 string",response_object:"object that includes path to audio file. The URL: {ROOT}file={name} contains the data"},example_data:{name:"audio.wav",data:"data:audio/wav;base64,UklGRiQAAABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YQAAAAA="}});export{qn as Component,Gn as document,Xn as modes}; -//# sourceMappingURL=index-a2b4a4fc.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Gauraiya 2 Full Movie Download PATCHED 1080p.md b/spaces/cihyFjudo/fairness-paper-search/Gauraiya 2 Full Movie Download PATCHED 1080p.md deleted file mode 100644 index a6b25f62ef535c34b83914dd1aca8748d47c367c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Gauraiya 2 Full Movie Download PATCHED 1080p.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Gauraiya 2 Full Movie Download 1080p


    Download ––– https://tinurli.com/2uwjF6



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md b/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md deleted file mode 100644 index 80c9a44ac3d42cc7099b6fb4d80d8002d3e428c2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Ssd Life Pro Registration Key How to Download and Activate the Most Advanced SSD Software.md +++ /dev/null @@ -1,22 +0,0 @@ - -

    How long does your SSD last? Are you afraid of SSD failure? Read this post to get the answers. It will also tell you how to take care of your SSD to expand its life and how to protect your data with MiniTool Partition Wizard when facing SSD failure.

    -

    SSDs are now widely available in mainstream PCs. Many of you may already have upgraded your hard drive from an HDD to an SSD, or may be planning to. SSDs clearly outperform HDDs, but some people worry about SSD lifespan. How long do SSDs last? Read on to learn how to calculate SSD life.

    -

    Ssd Life Pro Registration Key


    DOWNLOAD ––– https://tinurli.com/2uwj3j



    -

    TBW (Terabytes Written) indicates how much data a drive can write over its lifespan. For example, an SSD with 500 TBW means that the SSD can write 500 TB before it needs to be replaced.
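    As a rough illustration of how a TBW rating translates into lifespan, here is a minimal calculation sketch. The 500 TBW rating and the 50 GB-per-day write volume are assumed example figures, not measurements from any particular drive.

    ```python
    # Rough SSD lifespan estimate from a TBW rating -- illustrative only.
    # The rating and the daily write volume are assumed example figures.
    tbw_rating_tb = 500      # drive endurance rating, in terabytes written
    daily_writes_gb = 50     # average data written per day, in gigabytes

    total_writes_gb = tbw_rating_tb * 1000            # 500 TB = 500,000 GB
    lifespan_days = total_writes_gb / daily_writes_gb
    lifespan_years = lifespan_days / 365

    print(f"Estimated lifespan: {lifespan_days:.0f} days (~{lifespan_years:.1f} years)")
    # Estimated lifespan: 10000 days (~27.4 years)
    ```

    In other words, the more data you write per day, the faster you consume the drive's rated endurance.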

    -

    Write amplification can shorten SSD life considerably. To mitigate this problem, newer techniques are applied, such as wear leveling and bad block management.

    -

    The wear leveling algorithm counters uneven "wear" of flash media sectors by distributing writes across multiple sectors. Thus, all sectors on the flash media reach their endurance limit at almost the same time, extending the life of the flash media.
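    A minimal sketch of that idea, assuming a toy flash device with only eight blocks and ignoring everything a real controller does (mapping tables, static vs. dynamic leveling), might look like this:

    ```python
    # Toy wear-leveling sketch: always write to the least-worn block.
    # Conceptual illustration only, not real SSD firmware logic.
    erase_counts = [0] * 8   # erase count per block for an 8-block "flash"

    def write_page(data):
        block = erase_counts.index(min(erase_counts))  # pick the least-worn block
        erase_counts[block] += 1                       # writing wears that block a little
        return block

    for i in range(100):
        write_page(f"payload-{i}")

    print(erase_counts)  # counts stay nearly equal, e.g. [13, 13, 13, 13, 12, 12, 12, 12]
    ```

    Because each write goes to the least-worn block, no single block is exhausted long before the others.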

    -

    Many of you may also be interested in SSD vs. HDD lifespan. How long do hard drives last? To answer this question, you should know about HDD lifespan too; in this post I only compare the two lifespans.

    -

    In theory, an HDD's lifespan is longer than an SSD's because an HDD writes new data by simply overwriting old data in place. In practice, however, many of you may find that HDDs fail sooner than SSDs in most cases, largely because of their moving mechanical parts.

    -

    This operation will not cause problems with HDDs, but it will affect the efficiency of garbage-collection (GC) because some invalid data will be treated as valid data by SSDs. Hence, the SSD life will be shortened because it does more writes in GC.

    -

    For SSDs, disk defragmentation is unnecessary because SSDs have no seek time. What's more, this kind of pointless data movement can greatly shorten SSD life. Therefore, you should disable scheduled defragmentation (it is typically used in Windows versions before Windows 8).

    -

    -

    Struggling to find some? The Macrium Reflect free version performs well. Simply enter your email address and the company will send you a registration code and download link to use. From here, install the software on your Windows 11 computer and enter the registration code on the screen after it asks for a licence key, pressing Next to skip the licence key screen.

    -


    -

    No matter what version you have, Windows is the home of your digital life. But all that shuffling, downloading, and browsing takes its toll and can quickly clog up your system. AVG TuneUp can clear out years of grime, make your browsing speedier and lighter, and keep your favorite apps updated automatically. Enjoy an optimal Windows experience with AVG TuneUp.

    -


    -

    SanDisk may, at its option, either: (1) repair or replace the Product with a new reconditioned or refurbished Product of equal or greater capacity, or another equivalent product; or (2) refund the current market value of the Product at the time the warranty claim is made to SanDisk, or the value as determined by regional requirements, if SanDisk is unable to repair or replace the Product. See below for regional requirements. In the case of replacements, SanDisk may replace the Product with one that was previously used, repaired, and tested to meet SanDisk specifications. SanDisk will not be liable for indirect or consequential damage (including loss of data), or for damage caused by improper use (including use in an incompatible device or manner and use otherwise not in accordance with the instructions), or by improper installation, unprofessional repair, modification or accident. This constitutes SanDisk's entire liability which will never exceed the price you paid for it, plus the necessary costs you made for the warranty claim. SanDisk products must not be used in applications where failure could threaten injury or life, such as life support systems. SANDISK DISCLAIMS ALL EXPRESS AND IMPLIED WARRANTIES TO THE FULLEST EXTENT PERMITTED BY LAW. IF SANDISK CANNOT DISCLAIM IMPLIED WARRANTIES UNDER APPLICABLE LAW, THEN TO THE EXTENT POSSIBLE, SUCH IMPLIED WARRANTIES ARE LIMITED TO THE DURATION OF THE EXPRESS WARRANTY. THE WARRANTY DURATION ON ANY REPLACED PRODUCT WILL BE THAT PORTION OF THE WARRANTY PERIOD REMAINING ON YOUR ORIGINAL PRODUCT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE EXCLUSION OR LIMITATION MAY NOT APPLY TO YOU.

    -

    SanDisk will not be liable for indirect or consequential damage (including loss of data), or for damage caused by improper use (including use in an incompatible device or manner and use otherwise not in accordance with the instructions), or by improper installation, unprofessional repair, modification or accident. This constitutes SanDisk's entire liability, which will never exceed the price you paid for it, plus the necessary costs you made for the warranty claim. SanDisk products must not be used in applications where failure could threaten injury or life, such as life support systems. SANDISK DISCLAIMS ALL EXPRESS AND IMPLIED WARRANTIES TO THE FULLEST EXTENT PERMITTED BY LAW. IF SANDISK CANNOT DISCLAIM IMPLIED WARRANTIES UNDER APPLICABLE LAW, THEN TO THE EXTENT POSSIBLE, SUCH IMPLIED WARRANTIES ARE LIMITED TO THE DURATION OF THE EXPRESS WARRANTY. THE WARRANTY DURATION ON ANY REPLACED PRODUCT WILL BE THAT PORTION OF THE WARRANTY PERIOD REMAINING ON YOUR ORIGINAL PRODUCT. SOME STATES (OR JURISDICTIONS) DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE EXCLUSION OR LIMITATION MAY NOT APPLY TO YOU. This limited warranty gives you specific legal rights. National, state and local laws may grant you other rights that are not affected by this warranty.

    -

    1. Manufacturer warrants to Customer that this Product, excluding content and/or software (if applicable) supplied with or within the Product, will, during the applicable Warranty Period (as defined below) specified below under normal use conditions, (a) be free from material defects in material or workmanship and, (b) will materially conform to Manufacturer's published product specifications. A Product will be considered to have a material defect or to be materially defective only if such Product does not meet the stated design lifetime (up to the applicable Warranty Period) and is returned to the appropriate location within the Warranty Period and subject to applicable performance threshold information contained in the Product's datasheet (as produced or provided by Manufacturer).

    -

    7. Manufacturer products, including the Product, must not be used in applications where failure could threaten injury or life, such as aviation, automotive, nuclear, medical or life support systems (or any other form of ultra-hazardous applications), and under no circumstances shall Manufacturer have any Warranty or other obligations arising from any such Product uses. NOTWITHSTANDING ANYTHING TO THE CONTRARY, AS PRODUCTS HAVE VARIED FAILURE RATES AND A LIMITED USEFUL LIFE PERIOD WITH ENDURANCE LIMITS, THE RESPONSIBILITY FOR THE DESIGN, MANUFACTURING AND ADEQUATE TESTING OF CUSTOMER'S APPLICATIONS, SYSTEMS AND DEVICES USING PRODUCTS HEREIN LIES WITH CUSTOMER WHERE FAILURE OF THE PRODUCT COULD RESULT, DIRECTLY OR INDIRECTLY, IN DEATH, PERSONAL INJURY, OR SEVERE PROPERTY OR ENVIRONMENTAL DAMAGE, INCLUDING WITHOUT LIMITATION, AS CRITICAL COMPONENTS IN MEDICAL DEVICES, LIFE SUPPORT DEVICES, AUTOMOTIVE AND OTHER CRITICAL APPLICATIONS ("CRITICAL APPLICATIONS"). CUSTOMER IS RESPONSIBLE FOR PUTTING IN PLACE SAFETY MEASURES AND APPROPRIATE REDUNDANCIES, FAULT TOLERANT AND BACK-UP FEATURES SUFFICIENT TO PROTECT END USERS FROM ANY RISK OF DAMAGE, INJURY OR DEATH RESULTING FROM ANY FAILURE OR ANY OTHER ISSUE IN THE UTILIZATION OF THE PRODUCTS IN OR BY CUSTOMER'S APPLICATIONS, SYSTEMS OR DEVICES WHEN DEALING WITH CRITICAL APPLICATIONS AND SHALL OTHERWISE PROVIDE END USERS WITH ALERTS, USE AND MAINTENANCE INSTRUCTIONS TO AVOID SUCH RISKS.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py deleted file mode 100644 index 870776bc7bf23230ff03d0185cb766f48180bce9..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/freetypePen.py +++ /dev/null @@ -1,458 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Pen to rasterize paths with FreeType.""" - -__all__ = ["FreeTypePen"] - -import os -import ctypes -import platform -import subprocess -import collections -import math - -import freetype -from freetype.raw import FT_Outline_Get_Bitmap, FT_Outline_Get_BBox, FT_Outline_Get_CBox -from freetype.ft_types import FT_Pos -from freetype.ft_structs import FT_Vector, FT_BBox, FT_Bitmap, FT_Outline -from freetype.ft_enums import ( - FT_OUTLINE_NONE, - FT_OUTLINE_EVEN_ODD_FILL, - FT_PIXEL_MODE_GRAY, - FT_CURVE_TAG_ON, - FT_CURVE_TAG_CONIC, - FT_CURVE_TAG_CUBIC, -) -from freetype.ft_errors import FT_Exception - -from fontTools.pens.basePen import BasePen, PenError -from fontTools.misc.roundTools import otRound -from fontTools.misc.transform import Transform - -Contour = collections.namedtuple("Contour", ("points", "tags")) - - -class FreeTypePen(BasePen): - """Pen to rasterize paths with FreeType. Requires `freetype-py` module. - - Constructs ``FT_Outline`` from the paths, and renders it within a bitmap - buffer. - - For ``array()`` and ``show()``, `numpy` and `matplotlib` must be installed. - For ``image()``, `Pillow` is required. Each module is lazily loaded when the - corresponding method is called. - - Args: - glyphSet: a dictionary of drawable glyph objects keyed by name - used to resolve component references in composite glyphs. 
- - :Examples: - If `numpy` and `matplotlib` is available, the following code will - show the glyph image of `fi` in a new window:: - - from fontTools.ttLib import TTFont - from fontTools.pens.freetypePen import FreeTypePen - from fontTools.misc.transform import Offset - pen = FreeTypePen(None) - font = TTFont('SourceSansPro-Regular.otf') - glyph = font.getGlyphSet()['fi'] - glyph.draw(pen) - width, ascender, descender = glyph.width, font['OS/2'].usWinAscent, -font['OS/2'].usWinDescent - height = ascender - descender - pen.show(width=width, height=height, transform=Offset(0, -descender)) - - Combining with `uharfbuzz`, you can typeset a chunk of glyphs in a pen:: - - import uharfbuzz as hb - from fontTools.pens.freetypePen import FreeTypePen - from fontTools.pens.transformPen import TransformPen - from fontTools.misc.transform import Offset - - en1, en2, ar, ja = 'Typesetting', 'Jeff', 'صف الحروف', 'たいぷせっと' - for text, font_path, direction, typo_ascender, typo_descender, vhea_ascender, vhea_descender, contain, features in ( - (en1, 'NotoSans-Regular.ttf', 'ltr', 2189, -600, None, None, False, {"kern": True, "liga": True}), - (en2, 'NotoSans-Regular.ttf', 'ltr', 2189, -600, None, None, True, {"kern": True, "liga": True}), - (ar, 'NotoSansArabic-Regular.ttf', 'rtl', 1374, -738, None, None, False, {"kern": True, "liga": True}), - (ja, 'NotoSansJP-Regular.otf', 'ltr', 880, -120, 500, -500, False, {"palt": True, "kern": True}), - (ja, 'NotoSansJP-Regular.otf', 'ttb', 880, -120, 500, -500, False, {"vert": True, "vpal": True, "vkrn": True}) - ): - blob = hb.Blob.from_file_path(font_path) - face = hb.Face(blob) - font = hb.Font(face) - buf = hb.Buffer() - buf.direction = direction - buf.add_str(text) - buf.guess_segment_properties() - hb.shape(font, buf, features) - - x, y = 0, 0 - pen = FreeTypePen(None) - for info, pos in zip(buf.glyph_infos, buf.glyph_positions): - gid = info.codepoint - transformed = TransformPen(pen, Offset(x + pos.x_offset, y + pos.y_offset)) - font.draw_glyph_with_pen(gid, transformed) - x += pos.x_advance - y += pos.y_advance - - offset, width, height = None, None, None - if direction in ('ltr', 'rtl'): - offset = (0, -typo_descender) - width = x - height = typo_ascender - typo_descender - else: - offset = (-vhea_descender, -y) - width = vhea_ascender - vhea_descender - height = -y - pen.show(width=width, height=height, transform=Offset(*offset), contain=contain) - - For Jupyter Notebook, the rendered image will be displayed in a cell if - you replace ``show()`` with ``image()`` in the examples. - """ - - def __init__(self, glyphSet): - BasePen.__init__(self, glyphSet) - self.contours = [] - - def outline(self, transform=None, evenOdd=False): - """Converts the current contours to ``FT_Outline``. - - Args: - transform: An optional 6-tuple containing an affine transformation, - or a ``Transform`` object from the ``fontTools.misc.transform`` - module. - evenOdd: Pass ``True`` for even-odd fill instead of non-zero. 
- """ - transform = transform or Transform() - if not hasattr(transform, "transformPoint"): - transform = Transform(*transform) - n_contours = len(self.contours) - n_points = sum((len(contour.points) for contour in self.contours)) - points = [] - for contour in self.contours: - for point in contour.points: - point = transform.transformPoint(point) - points.append( - FT_Vector( - FT_Pos(otRound(point[0] * 64)), FT_Pos(otRound(point[1] * 64)) - ) - ) - tags = [] - for contour in self.contours: - for tag in contour.tags: - tags.append(tag) - contours = [] - contours_sum = 0 - for contour in self.contours: - contours_sum += len(contour.points) - contours.append(contours_sum - 1) - flags = FT_OUTLINE_EVEN_ODD_FILL if evenOdd else FT_OUTLINE_NONE - return FT_Outline( - (ctypes.c_short)(n_contours), - (ctypes.c_short)(n_points), - (FT_Vector * n_points)(*points), - (ctypes.c_ubyte * n_points)(*tags), - (ctypes.c_short * n_contours)(*contours), - (ctypes.c_int)(flags), - ) - - def buffer( - self, width=None, height=None, transform=None, contain=False, evenOdd=False - ): - """Renders the current contours within a bitmap buffer. - - Args: - width: Image width of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - height: Image height of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - transform: An optional 6-tuple containing an affine transformation, - or a ``Transform`` object from the ``fontTools.misc.transform`` - module. The bitmap size is not affected by this matrix. - contain: If ``True``, the image size will be automatically expanded - so that it fits to the bounding box of the paths. Useful for - rendering glyphs with negative sidebearings without clipping. - evenOdd: Pass ``True`` for even-odd fill instead of non-zero. - - Returns: - A tuple of ``(buffer, size)``, where ``buffer`` is a ``bytes`` - object of the resulted bitmap and ``size`` is a 2-tuple of its - dimension. - - :Notes: - The image size should always be given explicitly if you need to get - a proper glyph image. When ``width`` and ``height`` are omitted, it - forcifully fits to the bounding box and the side bearings get - cropped. If you pass ``0`` to both ``width`` and ``height`` and set - ``contain`` to ``True``, it expands to the bounding box while - maintaining the origin of the contours, meaning that LSB will be - maintained but RSB won’t. The difference between the two becomes - more obvious when rotate or skew transformation is applied. - - :Example: - .. 
code-block:: - - >> pen = FreeTypePen(None) - >> glyph.draw(pen) - >> buf, size = pen.buffer(width=500, height=1000) - >> type(buf), len(buf), size - (, 500000, (500, 1000)) - - """ - transform = transform or Transform() - if not hasattr(transform, "transformPoint"): - transform = Transform(*transform) - contain_x, contain_y = contain or width is None, contain or height is None - if contain_x or contain_y: - dx, dy = transform.dx, transform.dy - bbox = self.bbox - p1, p2, p3, p4 = ( - transform.transformPoint((bbox[0], bbox[1])), - transform.transformPoint((bbox[2], bbox[1])), - transform.transformPoint((bbox[0], bbox[3])), - transform.transformPoint((bbox[2], bbox[3])), - ) - px, py = (p1[0], p2[0], p3[0], p4[0]), (p1[1], p2[1], p3[1], p4[1]) - if contain_x: - if width is None: - dx = dx - min(*px) - width = max(*px) - min(*px) - else: - dx = dx - min(min(*px), 0.0) - width = max(width, max(*px) - min(min(*px), 0.0)) - if contain_y: - if height is None: - dy = dy - min(*py) - height = max(*py) - min(*py) - else: - dy = dy - min(min(*py), 0.0) - height = max(height, max(*py) - min(min(*py), 0.0)) - transform = Transform(*transform[:4], dx, dy) - width, height = math.ceil(width), math.ceil(height) - buf = ctypes.create_string_buffer(width * height) - bitmap = FT_Bitmap( - (ctypes.c_int)(height), - (ctypes.c_int)(width), - (ctypes.c_int)(width), - (ctypes.POINTER(ctypes.c_ubyte))(buf), - (ctypes.c_short)(256), - (ctypes.c_ubyte)(FT_PIXEL_MODE_GRAY), - (ctypes.c_char)(0), - (ctypes.c_void_p)(None), - ) - outline = self.outline(transform=transform, evenOdd=evenOdd) - err = FT_Outline_Get_Bitmap( - freetype.get_handle(), ctypes.byref(outline), ctypes.byref(bitmap) - ) - if err != 0: - raise FT_Exception(err) - return buf.raw, (width, height) - - def array( - self, width=None, height=None, transform=None, contain=False, evenOdd=False - ): - """Returns the rendered contours as a numpy array. Requires `numpy`. - - Args: - width: Image width of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - height: Image height of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - transform: An optional 6-tuple containing an affine transformation, - or a ``Transform`` object from the ``fontTools.misc.transform`` - module. The bitmap size is not affected by this matrix. - contain: If ``True``, the image size will be automatically expanded - so that it fits to the bounding box of the paths. Useful for - rendering glyphs with negative sidebearings without clipping. - evenOdd: Pass ``True`` for even-odd fill instead of non-zero. - - Returns: - A ``numpy.ndarray`` object with a shape of ``(height, width)``. - Each element takes a value in the range of ``[0.0, 1.0]``. - - :Notes: - The image size should always be given explicitly if you need to get - a proper glyph image. When ``width`` and ``height`` are omitted, it - forcifully fits to the bounding box and the side bearings get - cropped. If you pass ``0`` to both ``width`` and ``height`` and set - ``contain`` to ``True``, it expands to the bounding box while - maintaining the origin of the contours, meaning that LSB will be - maintained but RSB won’t. The difference between the two becomes - more obvious when rotate or skew transformation is applied. - - :Example: - .. 
code-block:: - - >> pen = FreeTypePen(None) - >> glyph.draw(pen) - >> arr = pen.array(width=500, height=1000) - >> type(a), a.shape - (, (1000, 500)) - """ - import numpy as np - - buf, size = self.buffer( - width=width, - height=height, - transform=transform, - contain=contain, - evenOdd=evenOdd, - ) - return np.frombuffer(buf, "B").reshape((size[1], size[0])) / 255.0 - - def show( - self, width=None, height=None, transform=None, contain=False, evenOdd=False - ): - """Plots the rendered contours with `pyplot`. Requires `numpy` and - `matplotlib`. - - Args: - width: Image width of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - height: Image height of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - transform: An optional 6-tuple containing an affine transformation, - or a ``Transform`` object from the ``fontTools.misc.transform`` - module. The bitmap size is not affected by this matrix. - contain: If ``True``, the image size will be automatically expanded - so that it fits to the bounding box of the paths. Useful for - rendering glyphs with negative sidebearings without clipping. - evenOdd: Pass ``True`` for even-odd fill instead of non-zero. - - :Notes: - The image size should always be given explicitly if you need to get - a proper glyph image. When ``width`` and ``height`` are omitted, it - forcifully fits to the bounding box and the side bearings get - cropped. If you pass ``0`` to both ``width`` and ``height`` and set - ``contain`` to ``True``, it expands to the bounding box while - maintaining the origin of the contours, meaning that LSB will be - maintained but RSB won’t. The difference between the two becomes - more obvious when rotate or skew transformation is applied. - - :Example: - .. code-block:: - - >> pen = FreeTypePen(None) - >> glyph.draw(pen) - >> pen.show(width=500, height=1000) - """ - from matplotlib import pyplot as plt - - a = self.array( - width=width, - height=height, - transform=transform, - contain=contain, - evenOdd=evenOdd, - ) - plt.imshow(a, cmap="gray_r", vmin=0, vmax=1) - plt.show() - - def image( - self, width=None, height=None, transform=None, contain=False, evenOdd=False - ): - """Returns the rendered contours as a PIL image. Requires `Pillow`. - Can be used to display a glyph image in Jupyter Notebook. - - Args: - width: Image width of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - height: Image height of the bitmap in pixels. If omitted, it - automatically fits to the bounding box of the contours. - transform: An optional 6-tuple containing an affine transformation, - or a ``Transform`` object from the ``fontTools.misc.transform`` - module. The bitmap size is not affected by this matrix. - contain: If ``True``, the image size will be automatically expanded - so that it fits to the bounding box of the paths. Useful for - rendering glyphs with negative sidebearings without clipping. - evenOdd: Pass ``True`` for even-odd fill instead of non-zero. - - Returns: - A ``PIL.image`` object. The image is filled in black with alpha - channel obtained from the rendered bitmap. - - :Notes: - The image size should always be given explicitly if you need to get - a proper glyph image. When ``width`` and ``height`` are omitted, it - forcifully fits to the bounding box and the side bearings get - cropped. 
If you pass ``0`` to both ``width`` and ``height`` and set - ``contain`` to ``True``, it expands to the bounding box while - maintaining the origin of the contours, meaning that LSB will be - maintained but RSB won’t. The difference between the two becomes - more obvious when rotate or skew transformation is applied. - - :Example: - .. code-block:: - - >> pen = FreeTypePen(None) - >> glyph.draw(pen) - >> img = pen.image(width=500, height=1000) - >> type(img), img.size - (, (500, 1000)) - """ - from PIL import Image - - buf, size = self.buffer( - width=width, - height=height, - transform=transform, - contain=contain, - evenOdd=evenOdd, - ) - img = Image.new("L", size, 0) - img.putalpha(Image.frombuffer("L", size, buf)) - return img - - @property - def bbox(self): - """Computes the exact bounding box of an outline. - - Returns: - A tuple of ``(xMin, yMin, xMax, yMax)``. - """ - bbox = FT_BBox() - outline = self.outline() - FT_Outline_Get_BBox(ctypes.byref(outline), ctypes.byref(bbox)) - return (bbox.xMin / 64.0, bbox.yMin / 64.0, bbox.xMax / 64.0, bbox.yMax / 64.0) - - @property - def cbox(self): - """Returns an outline's ‘control box’. - - Returns: - A tuple of ``(xMin, yMin, xMax, yMax)``. - """ - cbox = FT_BBox() - outline = self.outline() - FT_Outline_Get_CBox(ctypes.byref(outline), ctypes.byref(cbox)) - return (cbox.xMin / 64.0, cbox.yMin / 64.0, cbox.xMax / 64.0, cbox.yMax / 64.0) - - def _moveTo(self, pt): - contour = Contour([], []) - self.contours.append(contour) - contour.points.append(pt) - contour.tags.append(FT_CURVE_TAG_ON) - - def _lineTo(self, pt): - if not (self.contours and len(self.contours[-1].points) > 0): - raise PenError("Contour missing required initial moveTo") - contour = self.contours[-1] - contour.points.append(pt) - contour.tags.append(FT_CURVE_TAG_ON) - - def _curveToOne(self, p1, p2, p3): - if not (self.contours and len(self.contours[-1].points) > 0): - raise PenError("Contour missing required initial moveTo") - t1, t2, t3 = FT_CURVE_TAG_CUBIC, FT_CURVE_TAG_CUBIC, FT_CURVE_TAG_ON - contour = self.contours[-1] - for p, t in ((p1, t1), (p2, t2), (p3, t3)): - contour.points.append(p) - contour.tags.append(t) - - def _qCurveToOne(self, p1, p2): - if not (self.contours and len(self.contours[-1].points) > 0): - raise PenError("Contour missing required initial moveTo") - t1, t2 = FT_CURVE_TAG_CONIC, FT_CURVE_TAG_ON - contour = self.contours[-1] - for p, t in ((p1, t1), (p2, t2)): - contour.points.append(p) - contour.tags.append(t) diff --git a/spaces/codelion/Grounding_DINO_demo/README.md b/spaces/codelion/Grounding_DINO_demo/README.md deleted file mode 100644 index 081e39d1a209013fc2a5342efc9b1307923488c8..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/README.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: Grounding DINO Demo -emoji: 💻 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Grounding DINO -[📃Paper](https://arxiv.org/abs/2303.05499) | -[📽️Video](https://www.youtube.com/watch?v=wxWDt5UiwY8) | -[🗯️ Github](https://github.com/IDEA-Research/GroundingDINO) | -[📯Demo on Colab](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) | -[🤗Demo on HF (Coming soon)]() - -[![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded) - - - -Official pytorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now! - - -## Highlight - -- **Open-Set Detection.** Detect **everything** with language! -- **High Performancce.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**. -- **Flexible.** Collaboration with Stable Diffusion for Image Editting. - -## News -[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs.\ -[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. Thanks to @Piotr! \ -[2023/03/22] Code is available Now! - - - -## TODO - -- [x] Release inference code and demo. -- [x] Release checkpoints. -- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos. -- [ ] Release training codes. - -## Install - -If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. It will be compiled under CPU-only mode if no CUDA available. - -```bash -pip install -e . -``` - -## Demo - -```bash -CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \ - -c /path/to/config \ - -p /path/to/checkpoint \ - -i .asset/cats.png \ - -o "outputs/0" \ - -t "cat ear." \ - [--cpu-only] # open it for cpu mode -``` -See the `demo/inference_on_a_image.py` for more details. - -## Checkpoints - - - - - - - - - - - - - - - - - - - - - - - - - -
    namebackboneDatabox AP on COCOCheckpointConfig
    1GroundingDINO-TSwin-TO365,GoldG,Cap4M48.4 (zero-shot) / 57.2 (fine-tune)linklink
    - - - -## Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@inproceedings{ShilongLiu2023GroundingDM, - title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, - author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang}, - year={2023} -} -``` - - - - - diff --git a/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h b/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h deleted file mode 100644 index 6405e72b64f70b15e284c19734c26361846e95b9..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/compat/w32pthreads.h +++ /dev/null @@ -1,191 +0,0 @@ -/* - * Copyright (C) 2010-2011 x264 project - * - * Authors: Steven Walters - * Pegasys Inc. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * w32threads to pthreads wrapper - */ - -#ifndef COMPAT_W32PTHREADS_H -#define COMPAT_W32PTHREADS_H - -/* Build up a pthread-like API using underlying Windows API. Have only static - * methods so as to not conflict with a potentially linked in pthread-win32 - * library. - * As most functions here are used without checking return values, - * only implement return values as necessary. 
*/ - -#define WIN32_LEAN_AND_MEAN -#include -#include -#include - -#include "libavutil/attributes.h" -#include "libavutil/common.h" -#include "libavutil/internal.h" -#include "libavutil/mem.h" -#include "libavutil/time.h" - -typedef struct pthread_t { - void *handle; - void *(*func)(void* arg); - void *arg; - void *ret; -} pthread_t; - -/* use light weight mutex/condition variable API for Windows Vista and later */ -typedef SRWLOCK pthread_mutex_t; -typedef CONDITION_VARIABLE pthread_cond_t; - -#define PTHREAD_MUTEX_INITIALIZER SRWLOCK_INIT -#define PTHREAD_COND_INITIALIZER CONDITION_VARIABLE_INIT - -#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0) -#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE) - -#define PTHREAD_CANCEL_ENABLE 1 -#define PTHREAD_CANCEL_DISABLE 0 - -static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg) -{ - pthread_t *h = (pthread_t*)arg; - h->ret = h->func(h->arg); - return 0; -} - -static av_unused int pthread_create(pthread_t *thread, const void *unused_attr, - void *(*start_routine)(void*), void *arg) -{ - thread->func = start_routine; - thread->arg = arg; -#if HAVE_WINRT - thread->handle = (void*)CreateThread(NULL, 0, win32thread_worker, thread, - 0, NULL); -#else - thread->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, thread, - 0, NULL); -#endif - return !thread->handle; -} - -static av_unused int pthread_join(pthread_t thread, void **value_ptr) -{ - DWORD ret = WaitForSingleObject(thread.handle, INFINITE); - if (ret != WAIT_OBJECT_0) { - if (ret == WAIT_ABANDONED) - return EINVAL; - else - return EDEADLK; - } - if (value_ptr) - *value_ptr = thread.ret; - CloseHandle(thread.handle); - return 0; -} - -static inline int pthread_mutex_init(pthread_mutex_t *m, void* attr) -{ - InitializeSRWLock(m); - return 0; -} -static inline int pthread_mutex_destroy(pthread_mutex_t *m) -{ - /* Unlocked SWR locks use no resources */ - return 0; -} -static inline int pthread_mutex_lock(pthread_mutex_t *m) -{ - AcquireSRWLockExclusive(m); - return 0; -} -static inline int pthread_mutex_unlock(pthread_mutex_t *m) -{ - ReleaseSRWLockExclusive(m); - return 0; -} - -typedef INIT_ONCE pthread_once_t; -#define PTHREAD_ONCE_INIT INIT_ONCE_STATIC_INIT - -static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void)) -{ - BOOL pending = FALSE; - InitOnceBeginInitialize(once_control, 0, &pending, NULL); - if (pending) - init_routine(); - InitOnceComplete(once_control, 0, NULL); - return 0; -} - -static inline int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr) -{ - InitializeConditionVariable(cond); - return 0; -} - -/* native condition variables do not destroy */ -static inline int pthread_cond_destroy(pthread_cond_t *cond) -{ - return 0; -} - -static inline int pthread_cond_broadcast(pthread_cond_t *cond) -{ - WakeAllConditionVariable(cond); - return 0; -} - -static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) -{ - SleepConditionVariableSRW(cond, mutex, INFINITE, 0); - return 0; -} - -static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, - const struct timespec *abstime) -{ - int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000; - DWORD t = av_clip64(abs_milli - av_gettime() / 1000, 0, UINT32_MAX); - - if (!SleepConditionVariableSRW(cond, mutex, t, 0)) { - DWORD err = GetLastError(); - if (err == ERROR_TIMEOUT) - return ETIMEDOUT; - else - return EINVAL; - } - return 0; -} - 
-static inline int pthread_cond_signal(pthread_cond_t *cond) -{ - WakeConditionVariable(cond); - return 0; -} - -static inline int pthread_setcancelstate(int state, int *oldstate) -{ - return 0; -} - -#endif /* COMPAT_W32PTHREADS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c deleted file mode 100644 index 1230f5de8869b8a0c9b7a5a72aafa914f00c81ab..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/pixblockdsp_mmi.c +++ /dev/null @@ -1,135 +0,0 @@ -/* - * Loongson SIMD optimized pixblockdsp - * - * Copyright (c) 2015 Loongson Technology Corporation Limited - * Copyright (c) 2015 Zhou Xiaoyong - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "pixblockdsp_mips.h" -#include "libavutil/mips/asmdefs.h" -#include "libavutil/mips/mmiutils.h" - -void ff_get_pixels_8_mmi(int16_t *av_restrict block, const uint8_t *pixels, - ptrdiff_t stride) -{ - double ftmp[7]; - DECLARE_VAR_ALL64; - DECLARE_VAR_ADDRT; - - __asm__ volatile ( - "pxor %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - - MMI_LDC1(%[ftmp1], %[pixels], 0x00) - MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00) - "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t" - "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t" - "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t" - "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t" - MMI_SDC1(%[ftmp3], %[block], 0x00) - MMI_SDC1(%[ftmp4], %[block], 0x08) - MMI_SDC1(%[ftmp5], %[block], 0x10) - MMI_SDC1(%[ftmp6], %[block], 0x18) - PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t" - - MMI_LDC1(%[ftmp1], %[pixels], 0x00) - MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00) - "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t" - "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t" - "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t" - "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t" - MMI_SDC1(%[ftmp3], %[block], 0x20) - MMI_SDC1(%[ftmp4], %[block], 0x28) - MMI_SDC1(%[ftmp5], %[block], 0x30) - MMI_SDC1(%[ftmp6], %[block], 0x38) - PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t" - - MMI_LDC1(%[ftmp1], %[pixels], 0x00) - MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00) - "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t" - "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t" - "punpcklbh %[ftmp5], %[ftmp2], %[ftmp0] \n\t" - "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t" - MMI_SDC1(%[ftmp3], %[block], 0x40) - MMI_SDC1(%[ftmp4], %[block], 0x48) - MMI_SDC1(%[ftmp5], %[block], 0x50) - MMI_SDC1(%[ftmp6], %[block], 0x58) - PTR_ADDU "%[pixels], %[pixels], %[stride_x2] \n\t" - - MMI_LDC1(%[ftmp1], %[pixels], 0x00) - MMI_LDXC1(%[ftmp2], %[pixels], %[stride], 0x00) - "punpcklbh %[ftmp3], %[ftmp1], %[ftmp0] \n\t" - "punpckhbh %[ftmp4], %[ftmp1], %[ftmp0] \n\t" - "punpcklbh 
%[ftmp5], %[ftmp2], %[ftmp0] \n\t" - "punpckhbh %[ftmp6], %[ftmp2], %[ftmp0] \n\t" - MMI_SDC1(%[ftmp3], %[block], 0x60) - MMI_SDC1(%[ftmp4], %[block], 0x68) - MMI_SDC1(%[ftmp5], %[block], 0x70) - MMI_SDC1(%[ftmp6], %[block], 0x78) - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - [ftmp6]"=&f"(ftmp[6]), - RESTRICT_ASM_ALL64 - RESTRICT_ASM_ADDRT - [pixels]"+&r"(pixels) - : [block]"r"((mips_reg)block), [stride]"r"((mips_reg)stride), - [stride_x2]"r"((mips_reg)(stride<<1)) - : "memory" - ); -} - -void ff_diff_pixels_mmi(int16_t *av_restrict block, const uint8_t *src1, - const uint8_t *src2, ptrdiff_t stride) -{ - double ftmp[5]; - mips_reg tmp[1]; - DECLARE_VAR_ALL64; - - __asm__ volatile ( - "li %[tmp0], 0x08 \n\t" - "pxor %[ftmp4], %[ftmp4], %[ftmp4] \n\t" - "1: \n\t" - MMI_LDC1(%[ftmp0], %[src1], 0x00) - "por %[ftmp1], %[ftmp0], %[ftmp0] \n\t" - MMI_LDC1(%[ftmp2], %[src2], 0x00) - "por %[ftmp3], %[ftmp2], %[ftmp2] \n\t" - "punpcklbh %[ftmp0], %[ftmp0], %[ftmp4] \n\t" - "punpckhbh %[ftmp1], %[ftmp1], %[ftmp4] \n\t" - "punpcklbh %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - "punpckhbh %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - "psubh %[ftmp0], %[ftmp0], %[ftmp2] \n\t" - "psubh %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - MMI_SDC1(%[ftmp0], %[block], 0x00) - MMI_SDC1(%[ftmp1], %[block], 0x08) - PTR_ADDI "%[tmp0], %[tmp0], -0x01 \n\t" - PTR_ADDIU "%[block], %[block], 0x10 \n\t" - PTR_ADDU "%[src1], %[src1], %[stride] \n\t" - PTR_ADDU "%[src2], %[src2], %[stride] \n\t" - "bgtz %[tmp0], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), - [tmp0]"=&r"(tmp[0]), - RESTRICT_ASM_ALL64 - [block]"+&r"(block), [src1]"+&r"(src1), - [src2]"+&r"(src2) - : [stride]"r"((mips_reg)stride) - : "memory" - ); -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md b/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md deleted file mode 100644 index a5bb19706d3d44eb49742b046ecd01c78b080f62..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Air Attack v4.52 MOD APK The Ultimate Arcade Game with Unlimited Life.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    Air Attack Mod APK Unlimited Life: A Review

    -

    If you are looking for a thrilling and action-packed arcade game, you might want to check out Air Attack Mod APK Unlimited Life. This is a modified version of the original Air Attack game, which gives you unlimited gold coins, unlimited lives, and no ads. In this article, we will review this modded game and tell you how to download and install it on your Android device.

    -

    What is Air Attack Mod APK?

    -

    Air Attack is a classic arcade game that lets you fly a fighter plane and shoot down enemy aircraft, tanks, ships, and buildings. You can choose from different planes, weapons, and upgrades to customize your gameplay. You can also play in different modes, such as campaign, survival, and multiplayer.

    -

    air attack mod apk unlimited life


    Download Zip ===== https://urlca.com/2uO94C



    -

    Air Attack Mod APK is a hacked version of the original game that gives you some extra benefits. With this modded game, you can enjoy unlimited gold coins, unlimited lives, and no ads. This means you can buy anything you want in the game store, play as long as you want without losing lives, and enjoy a smooth and uninterrupted gaming experience.

    -

    Features of Air Attack Mod APK

    -

    Unlimited gold coins

    -

    Gold coins are the main currency in Air Attack. You can use them to buy new planes, weapons, upgrades, and power-ups. Normally, you have to earn gold coins by playing the game or watching ads. But with Air Attack Mod APK Unlimited Life, you can get unlimited gold coins for free. You can spend them as much as you want without worrying about running out.

    -

    Unlimited lives

    -

    Lives are the number of times you can play the game before you have to start over. Normally, you have to be careful not to get hit by enemy fire or crash into obstacles. If you lose all your lives, you have to wait for them to regenerate or buy more with gold coins. But with Air Attack Mod APK Unlimited Life, you can get unlimited lives for free. You can play as long as you want without losing lives or waiting for them to refill.

    -

    No ads

    -

    Ads are the annoying pop-ups that interrupt your gameplay and waste your time. Normally, you have to watch ads to earn gold coins or get extra lives. But with Air Attack Mod APK Unlimited Life, you can get rid of all the ads for free. You can enjoy a smooth and uninterrupted gaming experience without any distractions.

    -

    How to download and install Air Attack Mod APK?

    -

    If you want to download and install Air Attack Mod APK Unlimited Life on your Android device, you have to follow these simple steps:

    -


    -

    Step 1: Download the APK file

    -

    The first step is to download the APK file of Air Attack Mod APK Unlimited Life from a reliable source. You can use this link to download the file directly to your device.
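
    If you would rather grab the file on a computer first and copy it to your phone afterwards, a minimal Python sketch like the one below could handle the download. The URL and file name here are only placeholders for illustration, not the game's official download link.

```python
# Sketch: download an APK to your computer so you can transfer it to your phone later.
# Both the URL and the file name below are placeholders, not real values.
import urllib.request

apk_url = "https://example.com/air-attack-mod.apk"   # placeholder download link
local_path = "air-attack-mod.apk"                    # placeholder file name

urllib.request.urlretrieve(apk_url, local_path)      # fetch the file over HTTPS
print(f"Saved the APK as {local_path}")
```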

    -

    Step 2: Enable unknown sources

    -

    The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

    -

    Step 3: Install the APK file

    -

    The third step is to install the APK file on your device. To do this, locate the downloaded file in your file manager and tap on it.

    A pop-up window will appear asking you to confirm the installation. Tap on Install and wait for the process to finish.
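
    If tapping through the on-device prompts is inconvenient, you can also push the file from a computer over USB. The sketch below is one possible way to do that with adb called from Python; it assumes the Android platform-tools are installed, USB debugging is enabled on your phone, and the APK file name is a placeholder.

```python
# Sketch: install a locally saved APK over USB using adb (Android platform-tools).
# Assumes adb is on your PATH and USB debugging is enabled; the file name is a placeholder.
import subprocess

apk_path = "air-attack-mod.apk"  # placeholder file name

# The -r flag replaces the app if an older version is already installed.
result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True)
print(result.stdout or result.stderr)
```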

    -

    Step 4: Launch the game and enjoy

    -

    The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the Air Attack icon. You can now play the game with unlimited gold coins, unlimited lives, and no ads.

    -

    Pros and cons of Air Attack Mod APK

    -

    Like any other modded game, Air Attack Mod APK Unlimited Life has its pros and cons. Here are some of them:

    -

    Pros

    -

    Fun and addictive gameplay

    -

    Air Attack Mod APK Unlimited Life offers fun and addictive gameplay that will keep you hooked for hours. You can fly a fighter plane and shoot down enemy aircraft, tanks, ships, and buildings. You can also play in different modes, such as campaign, survival, and multiplayer. You can challenge yourself with different levels of difficulty and missions.

    -

    Stunning graphics and sound effects

    -

    Air Attack Mod APK Unlimited Life has stunning graphics and sound effects that will make you feel like you are in a real war zone. You can enjoy the realistic 3D environments, animations, and explosions. You can also hear the roaring of the engines, the firing of the guns, and the screams of the enemies.

    -

    Easy to control and customize

    -

    Air Attack Mod APK Unlimited Life is easy to control and customize. You can use the touch screen or the accelerometer to control your plane. You can also adjust the sensitivity and the sound settings according to your preference. You can also choose from different planes, weapons, upgrades, and power-ups to customize your gameplay.

    -

    Cons

    -

    Requires internet connection

    -

    Air Attack Mod APK Unlimited Life requires an internet connection to play. This means you cannot play it offline or in areas with poor network coverage. This can be a problem if you want to play the game without any interruptions or data charges.

    -

    May not be compatible with some devices

    -

    Air Attack Mod APK Unlimited Life may not be compatible with some devices. This means you may encounter some errors or glitches while playing the game on your device. This can be a problem if you want to enjoy the game without any issues or crashes.

    -

    Conclusion

    -

    Air Attack Mod APK Unlimited Life is a modified version of the original Air Attack game that gives you unlimited gold coins, unlimited lives, and no ads. It is a thrilling and action-packed arcade game that lets you fly a fighter plane and shoot down enemy aircraft, tanks, ships, and buildings. It has fun and addictive gameplay, stunning graphics and sound effects, and easy-to-use controls and customization options. However, it also requires an internet connection to play and may not be compatible with some devices. If you want to try this modded game, you can download it from this link and follow the steps we have provided above.

    -

    FAQs

    -

    Here are some frequently asked questions about Air Attack Mod APK Unlimited Life:

    -
      -
    • Is Air Attack Mod APK Unlimited Life safe to download?
    • -

      Yes, Air Attack Mod APK Unlimited Life is safe to download as long as you use a reliable source like this link. However, you should always be careful when downloading any modded game from unknown sources as they may contain viruses or malware that can harm your device.

      -
    • Is Air Attack Mod APK Unlimited Life legal to use?
    • -

      No, Air Attack Mod APK Unlimited Life is not legal to use as it violates the terms and conditions of the original game developer. By using this modded game, you are breaking the rules and risking your account being banned or suspended. Therefore, we do not recommend using this modded game for any purposes.

      -
    • Can I play Air Attack Mod APK Unlimited Life with my friends?
    • -

      Yes, you can play Air Attack Mod APK Unlimited Life with your friends online. You can join or create a room in multiplayer mode and invite your friends to join you. You can also chat with them in real-time and compete with them in different missions.

      -
    • Can I update Air Attack Mod APK Unlimited Life?
    • -

      No, you cannot update Air Attack Mod APK Unlimited Life as it is a modded version of the original game. If you try to update it from the Google Play Store or any other source, you will lose all the modded features and revert back to the original version. Therefore, we advise you not to update this modded game unless there is a new version of the modded game available from the same source.

      -
    • What are some alternatives to Air Attack Mod APK Unlimited Life?
    • -

      If you are looking for some alternatives to Air Attack Mod APK Unlimited Life, you can try these games:

      | Name | Description |
      | --- | --- |
      | Sky Force Reloaded | A classic arcade shooter game that lets you fly a fighter plane and blast your enemies with various weapons and power-ups. You can also upgrade your plane and collect cards and achievements. |
      | Galaxy Attack: Alien Shooter | A space shooter game that lets you fly a spaceship and defend the Earth from alien invaders. You can also upgrade your spaceship and weapons and play in different modes and levels. |
      | 1945 Air Force | A retro-style arcade shooter game that lets you fly a warplane and fight against the Axis forces in World War II. You can also collect and upgrade different planes and weapons and play in different modes and missions. |

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md b/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md deleted file mode 100644 index 98cfd417853cb27125cba2f7751acc5759b68f3a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Cars Games and Enjoy the Variety of Tracks and Vehicles in GT Racing 2.md +++ /dev/null @@ -1,181 +0,0 @@ - -

      Download Cars Games: How to Find and Play the Best Racing Games on Your PC or Mobile Device

      -

      Do you love speed, adrenaline, and competition? Do you enjoy driving fast cars, trucks, motorcycles, or even futuristic vehicles? If you answered yes, then you might be interested in cars games. Cars games are video games that involve racing against other players or the computer on various tracks, roads, or terrains. Cars games are one of the most popular genres of video games, as they appeal to a wide range of audiences, from casual gamers to hardcore enthusiasts. Whether you want to experience realistic driving physics, colorful graphics, or fun gameplay mechanics, there is a cars game for you.

      -

      In this article, we will guide you through the different types of cars games, the platforms you can play them on, how to download them, how to play them, and what are some of the best cars games to try out. By the end of this article, you will be ready to rev up your engines and hit the asphalt.

      -

      download cars games


      Download Zip ---> https://urlca.com/2uOgsA



      -

      Types of Cars Games

      -

      Cars games can be classified into three main subgenres: sim racing, kart racing, and futuristic racing. Each subgenre has its own characteristics, advantages, and disadvantages. Let's take a look at each one.

      -

      Sim Racing

      -

      Sim racing stands for simulation racing. This subgenre aims to provide a realistic representation of driving and racing. Sim racing games feature licensed cars and motorbikes from real manufacturers, authentic tracks and locations from around the world, accurate physics and handling models, and detailed graphics and sound effects. Sim racing games are ideal for players who want to immerse themselves in the thrill of racing and learn how to control different vehicles in various conditions. Some examples of sim racing games are F1 23, Gran Turismo Sport, and Forza Motorsport 7.

      -

      Kart Racing

      -

      Kart racing is a subgenre that focuses on arcade-style racing with power-ups and weapons. Kart racing games feature cartoonish graphics, exaggerated physics, and humorous elements. Kart racing games are suitable for players who want to have fun and enjoy casual gameplay with friends or family. Some examples of kart racing games are Mario Kart 8 Deluxe, Crash Team Racing Nitro-Fueled, and Sonic & All-Stars Racing Transformed.

      -

      Futuristic Racing

      -

      Futuristic racing is a subgenre that involves racing in sci-fi settings with advanced vehicles. Futuristic racing games feature high-speed action, stunning visuals, and innovative gameplay mechanics. Futuristic racing games are perfect for players who want to explore new worlds and experience exhilarating sensations. Some examples of futuristic racing games are Wipeout Omega Collection, Redout, and F-Zero GX.

      -

      Platforms for Cars Games

      -

      Cars games can be played on various platforms, such as PC (personal computer), mobile (smartphone or tablet), console (PlayStation, Xbox, Nintendo), or VR (virtual reality). Each platform has its own advantages and disadvantages when it comes to playing cars games. Let's compare two of the most common platforms: PC and mobile.

      -

      PC

      -

      PC is a platform that allows you to play cars games on your computer, either with a keyboard, mouse, or controller. PC has some advantages over other platforms, such as:

      -
        -
      • Higher performance: PC can run cars games at higher resolutions, frame rates, and graphics settings, resulting in smoother and sharper gameplay.
      • -
      • More customization: PC can let you adjust various options and settings to suit your preferences and needs, such as controls, audio, video, and difficulty.
      • -
      • More variety: PC can offer you access to a wider range of cars games, from indie titles to AAA games, from old classics to new releases.
      • -
      -

      However, PC also has some disadvantages, such as:

      -
        -
      • Higher cost: PC can require you to invest more money in buying or upgrading your hardware and software components, such as CPU, GPU, RAM, storage, operating system, drivers, and antivirus.
      • -
      • More complexity: PC can involve more steps and challenges in installing, launching, and running cars games, such as compatibility issues, bugs, crashes, and errors.
      • -
      • Less portability: PC can limit you to playing cars games in a fixed location, such as your home or office, unless you have a laptop or a gaming laptop.
      • -
      -

      Mobile

      -

      Mobile is a platform that enables you to play cars games on your smartphone or tablet, either with touch screen or tilt controls. Mobile has some advantages over other platforms, such as:

      -


      -
        -
      • Lower cost: Mobile can allow you to play cars games for free or for a low price, as most of them are available on app stores or websites.
      • -
      • Less complexity: Mobile can make it easier for you to download, install, and play cars games, as most of them are designed to be user-friendly and compatible with your device.
      • -
      • More portability: Mobile can let you play cars games anywhere and anytime, as long as you have your device and an internet connection.
      • -
      -

      However, mobile also has some disadvantages, such as:

      -
        -
      • Lower performance: Mobile can run cars games at lower resolutions, frame rates, and graphics settings, resulting in less smooth and less detailed gameplay.
      • -
      • Less customization: Mobile can offer you fewer options and settings to modify your gaming experience, such as controls, audio, video, and difficulty.
      • -
      • Less variety: Mobile can provide you with a smaller selection of cars games, as most of them are casual or simplified versions of PC or console games.
      • -
      -

      How to Download Cars Games

      -

      Depending on the platform you choose to play cars games on, there are different ways to download them. Here are the steps for downloading cars games on PC and mobile.

      -

      PC

      -

      To download cars games on PC, you have two main options: online stores or websites. Online stores are platforms that sell digital copies of cars games that you can buy and download directly to your computer. Some examples of online stores are Steam, Epic Games Store, and GOG.com. Websites are platforms that offer free or paid downloads of cars games that you can get from various sources. Some examples of websites are GameTop, My Real Games, and Softonic. Here are the steps for downloading cars games from online stores or websites:

      -
        -
      1. Browse the online store or website of your choice and look for the cars game you want to download.
      2. -
      3. Check the system requirements and the price of the cars game before downloading it.
      4. -
      5. Create an account or log in to the online store or website if needed.
      6. -
      7. Add the cars game to your cart or library and proceed to checkout or payment if required.
      8. -
      9. Download the cars game installer or launcher to your computer and run it.
      10. -
      11. Follow the instructions on the screen to install or launch the cars game on your computer.
      12. -
      13. Enjoy playing the cars game on your PC.
      14. -
      -

      Mobile

      -

      To download cars games on mobile, you have two main options: app stores or websites. App stores are platforms that sell or distribute apps of cars games that you can download directly to your smartphone or tablet. Some examples of app stores are Google Play Store, Apple App Store, and Amazon Appstore. Websites are platforms that offer free or paid downloads of apps or APK files of cars games that you can get from various sources. Some examples of websites are APKPure, APKMirror, and Uptodown. Here are the steps for downloading cars games from app stores or websites:

      -
        -
      1. Browse the app store or website of your choice and look for the cars game you want to download.
      2. -
      3. Check the ratings, reviews, and permissions of the cars game before downloading it.
      4. -
      5. Tap on the download or install button to start downloading the cars game to your device.
      6. -
      7. Wait for the download to finish and open the cars game app or APK file on your device.
      8. -
      9. Follow the instructions on the screen to install or launch the cars game on your device.
      10. -
      11. Enjoy playing the cars game on your mobile.
      12. -
      -

      How to Play Cars Games

      -

      Once you have downloaded and installed the cars game of your choice, you can start playing it. However, playing cars games can be challenging or frustrating if you don't know how to control your vehicle or how to win races. Here are some tips and tricks for playing cars games on PC and mobile.

      -

      PC

      -

      To play cars games on PC, you can use a keyboard, mouse, or controller. Each input device has its own advantages and disadvantages, so you should choose the one that suits your style and comfort. Here are some tips and tricks for playing cars games with a keyboard, mouse, or controller:

      -
        -
      • Keyboard: A keyboard is a common input device that allows you to use different keys to steer, accelerate, brake, and use other functions in cars games. A keyboard is easy to use and accessible, but it can be less precise and responsive than a mouse or controller. To play cars games with a keyboard, you should learn the default key bindings or customize them to your liking. You should also practice using the arrow keys or WASD keys to control your vehicle smoothly and accurately.
      • -
      • Mouse: A mouse is an input device that allows you to use a cursor and buttons to interact with cars games. A mouse is more precise and responsive than a keyboard, but it can be less comfortable and intuitive than a controller. To play cars games with a mouse, you should adjust the sensitivity and acceleration settings to your preference. You should also practice using the left and right buttons to steer, accelerate, brake, and use other functions in cars games.
      • -
      • Controller: A controller is an input device that allows you to use analog sticks, triggers, buttons, and other features to play cars games. A controller is more comfortable and intuitive than a keyboard or mouse, but it can be more expensive and require additional software or drivers. To play cars games with a controller, you should connect it to your PC via USB or Bluetooth. You should also learn the default button mappings or customize them to your liking. You should also practice using the analog sticks, triggers, buttons, and other features to control your vehicle smoothly and accurately.
      • -
      -

      Mobile

      -

      To play cars games on mobile, you can use touch screen or tilt controls. Touch screen controls allow you to tap, swipe, drag, and pinch on your device's screen to play cars games. Tilt controls allow you to tilt your device left or right to steer your vehicle in cars games. Both types of controls have their pros and cons, so you should choose the one that suits your style and comfort. Here are some tips and tricks for playing cars games with touch screen or tilt controls:

      -
        -
      • Touch screen: Touch screen controls are easy to use and accessible, but they can block your view of the game or cause accidental inputs. To play cars games with touch screen controls, you should adjust the size and position of the buttons or icons on your screen. You should also practice tapping, swiping, dragging, and pinching on your screen to steer, accelerate, brake, and use other functions in cars games.
      • -
      • Tilt: Tilt controls are more immersive and realistic than touch screen controls, but they can be less precise and stable than touch screen controls. To play cars games with tilt controls, you should calibrate your device's accelerometer before starting the game. You should also practice tilting your device left or right to steer your vehicle in cars games.
      • -
      -

      Best Cars Games to Download and Play

      -

      Now that you know how to download and play cars games on PC and mobile, you might be wondering what are some of the best cars games to try out. There are hundreds of cars games available on different platforms, but not all of them are worth your time and money. To help you find the best cars games for PC and mobile, we have compiled a table with some of the most popular and highly rated cars games for both platforms. The table includes the names, ratings, features, and links of the games.

      | Name | Rating | Features | Links |
      | --- | --- | --- | --- |
      | Forza Horizon 5 | 9.1/10 | - Open-world racing in Mexico with dynamic seasons and weather - Hundreds of cars to customize and drive - Various modes and events to participate in solo or online - Stunning graphics and sound effects | [PC], [Xbox](https://www.xbox.com/en-US/games/forza-horizon-5) |
      | Dirt 5 | 7.9/10 | - Off-road racing on various terrains and locations - Over 70 vehicles to choose from - Career mode with voice acting and story - Online and split-screen multiplayer modes - Playgrounds mode to create and share custom tracks | [PC], [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA16194_00-DIRT5FULLGAME000), [PS5](https://store.playstation.com/en-us/product/UP4001-PPSA01521_00-DIRT5FULLGAME000), [Xbox One](https://www.microsoft.com/en-us/p/dirt-5/9n0wzv3qzq0c?activetab=pivot:overviewtab), [Xbox Series X/S](https://www.microsoft.com/en-us/p/dirt-5-xbox-series-xs/9n0wzv3qzq0c?activetab=pivot:overviewtab) |
      | F1 2022 | 8.6/10 | - Official Formula One racing game with licensed teams, drivers, and circuits - Realistic simulation of driving and racing physics - Career mode with story and customization - Online mode with ranked and unranked races - Braking Point mode to experience a narrative-driven story | [PC], [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA26732_00-F120210000000000), [PS5](https://store.playstation.com/en-us/product/UP4001-PPSA02947_00-F120210000000000), [Xbox One](https://www.microsoft.com/en-us/p/f1-2021-xbox-one/9p6xhjgkxkxh?activetab=pivot:overviewtab), [Xbox Series X/S](https://www.microsoft.com/en-us/p/f1-2021-xbox-series-xs/9p6xhjgkxkxh?activetab=pivot:overviewtab) |
      | NFS Heat | 7.2/10 | - Street racing in a fictional open-world city - Day and night cycle with different modes and rewards - Over 120 cars to customize and upgrade - Online mode with up to 16 players - Cop chases and pursuits | [PC], [PS4](https://store.playstation.com/en-us/product/UP0006-CUSA15090_00-NFS2000MASTER000), [Xbox One](https://www.microsoft.com/en-us/p/need-for-speed-heat/9p8q2k21b6vg?activetab=pivot:overviewtab) |
      | Project CARS 3 | 6.8/10 | - Racing game with over 200 cars and 140 tracks - Career mode with progression and customization - Dynamic weather and time of day effects - Online mode with multiplayer races and challenges - VR support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP0700-CUSA19665_00-PJC3BASEGAMEUS00), [Xbox One](https://www.microsoft.com/en-us/p/project-cars-3/9n7l8fjv7l8s?activetab=pivot:overviewtab) |
      | Dirt Rally 2.0 | 8.4/10 | - Rally racing game with realistic physics and handling - Over 50 cars and 100 stages across six locations - Career mode with team management and upgrades - Online mode with daily, weekly, and monthly challenges - VR support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP4001-CUSA12819_00-DIRTRALLY2US0000), [Xbox One](https://www.microsoft.com/en-us/p/dirt-rally-20/c2wqnrj46mrv?activetab=pivot:overviewtab) |
      | Assetto Corsa Competizione | 7.7/10 | - Official GT World Challenge racing game with licensed cars, teams, and tracks - Advanced simulation of driving and racing mechanics - Dynamic weather and day-night cycle - Single-player and multiplayer modes - VR and triple screen support for PC | [PC], [PS4](https://store.playstation.com/en-us/product/UP4040-CUSA17346_00-ACCOMPETIZIONE00), [Xbox One](https://www.microsoft.com/en-us/p/assetto-corsa-competizione/9n5xqzg0q2xv?activetab=pivot:overviewtab) |
      | Asphalt 9: Legends | 4.6/5 | - Arcade-style racing game with over 60 cars and 80 tracks - Career mode with hundreds of events and challenges - Online mode with multiplayer races and clubs - Customization and upgrade system - Touch drive or manual controls | [Android], [iOS](https://apps.apple.com/us/app/asphalt-9-legends/id805603214), [Windows](https://www.microsoft.com/en-us/p/asphalt-9-legends/9nzqpt0mwtd0?activetab=pivot:overviewtab) |
      | Real Racing 3 | 4.4/5 | - Realistic racing game with over 250 cars and 40 tracks - Career mode with thousands of events and cups - Online mode with real-time multiplayer races and leaderboards - Time-shifted multiplayer mode to race against friends or strangers - Customization and upgrade system | [Android], [iOS](https://apps.apple.com/us/app/real-racing-3/id556164008) |
      | CSR Racing 2 | 4.6/5 | - Drag racing game with over 200 cars and various locations - Career mode with story and crew battles - Online mode with live races and events - Customization and tuning system - AR mode to view cars in real life | [Android], [iOS](https://apps.apple.com/us/app/csr-racing-2/id887947640) |
      | Hill Climb Racing 2 | 4.3/5 | - Physics-based racing game with various vehicles and terrains - Adventure mode with endless levels and challenges - Online mode with cups and leagues - Customization and upgrade system - Funny graphics and sound effects | [Android], [iOS](https://apps.apple.com/us/app/hill-climb-racing-2/id1146465836) |
      | Traffic Rider | 4.4/5 | - Motorcycle racing game with over 30 bikes and various roads - Career mode with over 70 missions and achievements - Endless mode with different modes and objectives - First-person view and realistic sound effects - Day-night cycle and weather effects | [Android], [iOS](https://apps.apple.com/us/app/traffic-rider/id951744068) |

      Conclusion

      -

      Cars games are video games that involve racing against other players or the computer on various tracks, roads, or terrains. Cars games are one of the most popular genres of video games, as they appeal to a wide range of audiences, from casual gamers to hardcore enthusiasts. Whether you want to experience realistic driving physics, colorful graphics, or fun gameplay mechanics, there is a cars game for you.

      -

      In this article, we have shown you the different types of cars games, the platforms you can play them on, how to download them, how to play them, and what are some of the best cars games to try out. We hope that this article has helped you find and play the best racing games on your PC or mobile device.

      -

      If you are looking for a way to download cars games, you can use the links provided in the table above. If you are looking for a way to play cars games, you can use the tips and tricks provided in this article. If you are looking for a way to have fun and enjoy cars games, you can start playing them right now.

      -

      So what are you waiting for? Download cars games today and start your engines!

      -

      FAQs

      -

      Here are some of the frequently asked questions about cars games:

      -
        -
      1. What are the benefits of playing cars games?
      2. -

        Playing cars games can have various benefits, such as:

        -
          -
        • Improving your hand-eye coordination, reaction time, and spatial awareness.
        • -
        • Enhancing your creativity, problem-solving, and decision-making skills.
        • -
        • Reducing your stress, boredom, and anxiety levels.
        • -
        • Increasing your enjoyment, satisfaction, and confidence.
        • -
        -
      3. What are the drawbacks of playing cars games?
      4. -

        Playing cars games can also have some drawbacks, such as:

        -
          -
        • Spending too much time or money on cars games, which can affect your health, productivity, and finances.
        • -
        • Exposing yourself to violent, inappropriate, or addictive content, which can affect your mood, behavior, and values.
        • -
        • Experiencing technical issues, such as lag, glitches, or errors, which can affect your gaming experience and performance.
        • -
        • Facing online risks, such as cyberbullying, hacking, or phishing, which can affect your privacy and security.
        • -
        -
      5. How to choose the best cars game for me?
      6. -

        To choose the best cars game for you, you should consider the following factors:

        -
          -
        • Your preference: You should choose a cars game that matches your taste and interest, such as the subgenre, the theme, the style, and the features.
        • -
        • Your platform: You should choose a cars game that is compatible with your device and input method, such as the system requirements, the controls, and the graphics.
        • -
        • Your budget: You should choose a cars game that fits your budget and expectations, such as the price, the quality, and the value.
        • -
        • Your feedback: You should choose a cars game that has positive feedback and reviews from other players and critics, such as the ratings, the comments, and the awards.
        • -
        -
      7. How to improve my skills in cars games?
      8. -

        To improve your skills in cars games, you should follow these tips:

        -
          -
        • Practice: You should play cars games regularly and frequently to improve your muscle memory, reflexes, and strategies.
        • -
        • Learn: You should watch tutorials, guides, and videos from experts and professionals to learn new techniques, tips, and tricks.
        • -
        • Challenge: You should try different modes, levels, and opponents to challenge yourself and test your skills.
        • -
        • Analyze: You should review your performance and mistakes to identify your strengths and weaknesses.
        • -
        • Enjoy: You should have fun and enjoy playing cars games without stressing too much about winning or losing.
        • -
        -
      9. How to find more information about cars games?
      10. -

        To find more information about cars games, you can use these resources:

        -
          -
        • Websites: You can visit websites that specialize in cars games or video games in general, such as IGN, GameSpot, or PC Gamer.
        • -
        • Blogs: You can read blogs that cover cars games or video games in general, such as Kotaku, Polygon, or Rock Paper Shotgun.
        • -
        • Podcasts: You can listen to podcasts that discuss cars games or video games in general, such as The Giant Bombcast, The Game Informer Show, or The PC Gaming Show.
        • -
        • Forums: You can join forums that are dedicated to cars games or video games in general, such as Reddit, Steam, or Discord.
        • -
        • Social media: You can follow social media accounts that share news and updates about cars games or video games in general, such as Twitter, Facebook, or Instagram.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md b/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md deleted file mode 100644 index 8c797dc85158fdecaaee1ed9ea08b2453fa86338..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Lep 39s World Download For Pc UPDATED.md +++ /dev/null @@ -1,44 +0,0 @@ - -

        How to Download and Play Lep's World on Your PC

        -

        If you are a fan of classic platform games like Super Mario, you might want to try Lep's World, a popular game that has over 250 million downloads. Lep's World is a fun and challenging game that follows the adventures of Lep, a leprechaun who has to find his gold and rescue his friends from an evil wizard. In this article, we will show you how to download and play Lep's World on your PC, so you can enjoy this game on a bigger screen and with better controls.

        -

        lep 39;s world download for pc


        DOWNLOAD ……… https://urlca.com/2uO7VA



        -

        What is Lep's World?

        -

        A platform game inspired by Super Mario

        -

        Lep's World is a platform game that is inspired by the legendary Super Mario series. The game has a similar gameplay, where you have to run, jump, collect coins, avoid enemies, and reach the end of each level. The game also has some differences, such as the ability to throw acorns at enemies, collect clover leaves to increase your health, and use different items and abilities to overcome obstacles.

        -

        Features and gameplay

        -

        Lep's World has many features that make it an enjoyable and addictive game. Some of these features are:

        -
          -
        • 160 well-designed levels across 8 different worlds
        • -
        • 8 amazing characters to choose from, each with their own skills and costumes
        • -
        • 9 challenging enemies and boss fights
        • -
        • Beautiful graphics and animations
        • -
        • Catchy music and sound effects
        • -
        • Achievements and leaderboards
        • -
        • Multiplayer mode
        • -
        • Frequent updates with new content
        • -
        -

        Why play Lep's World on PC?

        -

        Bigger screen and better graphics

        -

        One of the main reasons to play Lep's World on PC is that you can enjoy the game on a bigger screen and with better graphics. The game has colorful and detailed graphics that look great on a PC monitor. You can also adjust the resolution and quality settings to suit your preferences.

        -

        Easier controls and smoother performance

        -

        Another reason to play Lep's World on PC is that you can use easier controls and experience smoother performance. The game has simple controls that only require four buttons: left, right, jump, and throw. You can use your keyboard or a controller to play the game on your PC. You can also customize the key mapping to your liking. Moreover, playing on PC can reduce lag and glitches that might occur on mobile devices.

        -

        -

        How to download Lep's World on PC?

        -

        Option 1: Microsoft Store

        -

        Steps to download from Microsoft Store

        -
          -
        1. Open the Microsoft Store app on your PC. You can find it by searching for it in the Windows search bar.
        2. -
        3. Click on Gaming in the sidebar.
        4. -
        5. Type in Lep's World in the search box and press Enter.
        6. -
        7. Select Lep's World from the results and click on Get or Buy, depending on whether the game is free or paid.
        8. -
        9. Wait for the game to download and install on your PC.
        10. -
        11. Launch the game from the Microsoft Store app or from your Start menu.
        12. -
        -

        Option 2: Direct download from official website

        -

        Steps to download from official website

        -
          -
        1. Go to [Lep's World official website](^1^) using your web browser.
        2. -
        3. Click on Download for Windows button.
        4. -
        5. Save the installer file to your PC when the download starts.
        6. -
        7. Run the installer and follow the instructions on the screen to install Lep's World on your PC.
        8. -
        9. Launch the game and enjoy playing Lep's World on the big screen.

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md b/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md deleted file mode 100644 index db0dea1ac5cbc2cd1f1094393a531749eddacf0d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK How to Install and Play the Ultimate Augmented Reality Game.md +++ /dev/null @@ -1,171 +0,0 @@ - -

          Pokemon Go APK: Everything You Need to Know

          -

          Pokemon Go is one of the most popular mobile games in the world, with over a billion downloads and millions of active players. It is an augmented reality game that lets you explore the real world and catch virtual creatures called Pokemon. You can also battle other players, team up with friends, trade Pokemon, and more.

          -

          pokemon go apk


          Download ✸✸✸ https://urlca.com/2uOerh



          -

          If you are an Android user, you might be wondering how to download and install Pokemon Go on your device. One way to do that is by using an apk file, which is a package file that contains all the necessary data for an app. In this article, we will show you how to get the Pokemon Go apk file, what features it offers, some tips and tricks for playing the game, and how to keep it updated and compatible with your device.

          -

          What is an APK File and How to Install It?

          -

          An apk file is a compressed file that contains all the code, resources, assets, certificates, and manifest of an Android app. It is similar to an exe file for Windows or a dmg file for Mac. You can use an apk file to install an app on your Android device without using the Google Play Store.
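
          Since an apk file is really just a ZIP archive under the hood, you can open one on a computer and see those pieces for yourself. Here is a minimal Python sketch; the file name is a placeholder, and the exact entries you see will depend on the app.

```python
# Sketch: peek inside an APK. An APK is a standard ZIP archive, so Python's
# zipfile module can list its contents (manifest, compiled code, resources, ...).
import zipfile

apk_path = "pokemon-go.apk"  # placeholder file name

with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist()[:20]:  # first 20 entries are enough for a quick look
        print(name)                   # e.g. AndroidManifest.xml, classes.dex, res/...
```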

          -

          To install an apk file, you need to enable the option to allow installation from unknown sources in your device settings. This will let you install apps from sources other than the Google Play Store. However, you should be careful when downloading apk files from third-party websites, as they might contain malware or viruses that can harm your device or steal your data.
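
          One simple precaution, when the download site publishes a checksum for its files, is to verify the apk before you install it. The Python sketch below compares a file's SHA-256 hash against a published value; both the file name and the expected checksum are placeholders here.

```python
# Sketch: verify a downloaded APK against a SHA-256 checksum published by the source.
# The file name and the expected value below are placeholders.
import hashlib

apk_path = "pokemon-go.apk"                   # placeholder file name
expected = "paste-the-published-sha256-here"  # placeholder checksum

sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash the file in 1 MiB chunks
        sha256.update(chunk)

if sha256.hexdigest() == expected:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not install this file.")
```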

          -


          -

          One of the trusted sources for downloading apk files is APKCombo.com, which offers free and safe downloads of various Android apps and games. You can search for Pokemon Go on their website and download the latest version of the apk file. Then, you can open the file on your device and follow the instructions to install it. You might need to grant some permissions to the app, such as access to your location, camera, storage, etc.

          -

          Features of Pokemon Go

          -

          Pokemon Go is a game that combines the fun of catching Pokemon with the thrill of exploring the real world. Here are some of the features that make it so addictive:

          -

          Catching, Collecting, and Evolving Pokemon

          -

          The main goal of Pokemon Go is to catch as many different kinds of Pokemon as you can. You can find them by walking around your neighborhood or visiting different places. When you encounter a Pokemon, you can use your smartphone's touch screen to throw a Poke Ball at it and try to catch it. Some Pokemon are easier to catch than others, depending on their rarity, level, type, etc.

          -

          Once you catch a Pokemon, you can add it to your collection or transfer it to Professor Willow for some candy. You can use candy and stardust to power up your Pokemon and make them stronger. You can also use candy to evolve your Pokemon into new forms, such as evolving Charmander into Charmeleon or Eevee into Vaporeon.

          -

          You can check your Pokedex to see how many different kinds of Pokemon you have caught and how many more you need to complete it. There are over 600 species of Pokemon available in Pokemon Go, including some regional exclusives that can only be found in certain parts of the world. You can also encounter some special Pokemon, such as shiny, shadow, or costume Pokemon, that have different appearances or abilities.

          -

          Battling Other Trainers and Raiding Gyms

          -

          Pokemon Go is not just about catching Pokemon, but also about fighting with them. You can challenge other players to friendly battles or compete in ranked battles to earn rewards and glory. You can use up to three Pokemon in a battle and switch between them as needed. You can also use charged attacks and shields to gain an advantage over your opponent.

          -

          Another way to battle in Pokemon Go is by raiding gyms. Gyms are locations where you can join forces with other players to defeat a powerful Pokemon called a raid boss. Raid bosses can range from one to five stars in difficulty, and some of them are legendary or mythical Pokemon that are very rare and hard to catch. You can use a raid pass to join a raid, and you can invite up to five friends to join you. If you manage to defeat the raid boss within the time limit, you will get a chance to catch it and earn some rewards, such as rare candy, golden razz berries, or TMs.

          -

          Making Friends, Exchanging Gifts, and Trading Pokemon

          -

          Pokemon Go is also a social game that lets you interact with other players around the world. You can add other players as friends by using their trainer codes or scanning their QR codes. You can also join local or global communities of Pokemon Go players through platforms like Discord or Facebook.

          -

          One of the benefits of having friends in Pokemon Go is that you can exchange gifts with them. Gifts are items that you can get from spinning PokeStops or gyms, and they can contain useful items like Poke Balls, potions, revives, eggs, or stickers. You can send one gift per day to each of your friends, and you can open up to 20 gifts per day from your friends. Sending and opening gifts will increase your friendship level with your friends, which will give you some bonuses, such as extra damage in raids, extra balls in catching, or reduced stardust cost in trading.

          -

          Trading is another feature that you can enjoy with your friends in Pokemon Go. Trading is the process of exchanging one Pokemon for another with another player. You can trade any Pokemon that you have caught or hatched, except for some special ones like mythical Pokemon or your buddy Pokemon. Trading will cost some stardust, depending on the rarity and distance of the traded Pokemon. Trading will also change the IVs and CP of the traded Pokemon, which might make them better or worse than before. However, trading might also result in a lucky trade, which will guarantee high IVs and reduced stardust cost for powering up the traded Pokemon.

          -

          Tips and Tricks for Pokemon Go

          -

          Pokemon Go is a game that requires some strategy and skill to master. Here are some tips and tricks that will help you become a better trainer:

          -

          How to Find and Catch Rare Pokemon

          -

          Finding and catching rare Pokemon is one of the most exciting aspects of Pokemon Go. However, it is not always easy to do so, as rare Pokemon tend to spawn less frequently and flee more easily than common ones. Here are some ways to increase your chances of finding and catching rare Pokemon:

          -
            -
          • Use incense or lures to attract more Pokemon to your location. Incense will spawn one Pokemon every minute for 30 minutes, while lures will spawn one Pokemon every three minutes for 30 minutes at a specific PokeStop or gym. You can also use special incense or lures that will attract specific types of Pokemon.
          • -
          • Use the nearby or sightings feature to track down nearby Pokemon. The nearby feature will show you the Pokemon that are near PokeStops or gyms, while the sightings feature will show you the Pokemon that are in the wild. You can tap on a Pokemon to see its location on the map and follow the footsteps to find it.
          • -
          • Use the weather system to find weather-boosted Pokemon. The weather system will change the weather conditions in the game according to the real-world weather in your area. Different types of Pokemon will spawn more frequently and be stronger in different weather conditions. For example, fire-type Pokemon will spawn more often and have higher CP in sunny weather, while water-type Pokemon will spawn more often and have higher CP in rainy weather.
          • -
          • Use field research tasks or special research quests to encounter rare Pokemon. Field research tasks are missions that you can get from spinning PokeStops or gyms, and they will reward you with items or encounters with certain Pokemon after completing them. Special research quests are story-based missions that you can get from Professor Willow or other characters, and they will reward you with items or encounters with some rare or legendary Pokemon after completing them.
          • -
          • Use the adventure sync feature to hatch eggs and get rare Pokemon. The adventure sync feature will track your steps and distance even when the app is closed, and it will count towards hatching eggs that you can get from spinning PokeStops or gyms. Eggs can contain different kinds of Pokemon, depending on their distance and rarity. For example, 2 km eggs can contain common Pokemon like Pidgey or Rattata, while 10 km eggs can contain rare Pokemon like Dratini or Beldum.
          • -
          -

To catch rare Pokemon once you have found them, you need to use some strategies and items that increase your catch rate. Here are some tips (a rough sketch of how these bonuses stack is shown after the list):

          -
            -
• Use the right type of Poke Ball for the situation. There are different types of Poke Balls with different catch rates. For example, a Great Ball has a higher catch rate than a regular Poke Ball, while an Ultra Ball has an even higher catch rate. You will also receive special balls like the Premier Ball, which is the only ball you can use in the bonus challenge after winning a raid.
          • -
• Use curveballs and throw bonuses to improve your accuracy and catch rate. A curveball is when you spin the Poke Ball before throwing it, which makes it curve in the air and hit the Pokemon from the side. A throw bonus is when you land the ball inside the colored circle that appears around the Pokemon; the circle shrinks as you hold the Poke Ball, and its color reflects how hard the Pokemon is to catch. The smaller the circle is when you hit it, the bigger the bonus (Nice, Great, or Excellent). A curveball and a throw bonus will both increase your catch rate and give you extra XP.
          • -
          • Use berries to make catching easier. Berries are items that you can feed to a Pokemon before throwing a Poke Ball at it, and they will have different effects on the Pokemon. For example, a Razz Berry will make the Pokemon easier to catch, while a Nanab Berry will make the Pokemon less likely to move or attack. You can also use special berries like a Pinap Berry, which will double the candy you get from catching the Pokemon, or a Golden Razz Berry, which will greatly increase the catch rate.
          • -
          -
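The game does not publish its exact catch formula, but community research suggests the individual bonuses above stack multiplicatively. The snippet below is only a rough illustration of that idea; the base rate and every multiplier value in it are made-up numbers for the example, not the game's real ones.

```python
# Toy model of how catch bonuses might stack multiplicatively.
# The base_rate and multiplier values are illustrative, not the game's real numbers.

def catch_chance(base_rate, ball=1.0, curve=1.0, berry=1.0, throw=1.0):
    """Combine bonus multipliers and cap the result at 100%."""
    return min(1.0, base_rate * ball * curve * berry * throw)

# Example: a hard-to-catch Pokemon with an Ultra Ball, a curveball,
# a Golden Razz Berry, and a "Great" throw (all multipliers assumed).
p = catch_chance(0.05, ball=2.0, curve=1.7, berry=2.5, throw=1.5)
print(f"Per-throw catch chance: {p:.0%}")  # about 64% with these assumed values
```

The takeaway is simply that the bonuses are worth combining: each one on its own looks small, but together they can turn a frustrating encounter into a likely catch.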

          How to Use Items and Berries Effectively

          -

          Items and berries are essential tools that will help you in your Pokemon Go adventure. You can get them from spinning PokeStops or gyms, completing tasks or quests, opening gifts, or buying them from the shop. Here are some tips on how to use items and berries effectively:

          -
            -
          • Use potions and revives to heal your Pokemon after battles or raids. Potions are items that restore some HP to your Pokemon, while revives are items that restore some HP and revive your fainted Pokemon. There are different levels of potions and revives that restore different amounts of HP, such as Super Potion, Hyper Potion, Max Potion, Revive, and Max Revive.
          • -
• Use incense or lures to attract more Pokemon to your location, as described in the previous section. Incense follows you around, lure modules work at a specific PokeStop or gym, and the special variants of both attract specific types of Pokemon.
          • -
• Use lucky eggs or star pieces to boost your XP or stardust gain. Lucky eggs are items that double your XP gain for 30 minutes, while star pieces are items that increase your stardust gain by 50% for 30 minutes (a quick estimate of what that adds up to is sketched after this list). You can use them before doing activities that give you a lot of XP or stardust, such as catching Pokemon, hatching eggs, evolving Pokemon, completing tasks or quests, battling or raiding, etc.
          • -
          • Use TMs to change your Pokemon's moves. TMs are items that let you change one of your Pokemon's moves to a random one of the same type. There are two types of TMs: fast TMs and charged TMs. Fast TMs will change your Pokemon's fast move, which is the move that you use by tapping on the screen during a battle. Charged TMs will change your Pokemon's charged move, which is the move that you use by holding on the screen during a battle after filling up the energy bar.
          • -
          • Use rare candy to power up or evolve any Pokemon. Rare candy is an item that can be converted into candy for any Pokemon species. You can use it to power up or evolve any Pokemon that you want, especially those that are hard to find or catch.
          • -
          -
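To get a feel for what the lucky egg and star piece bullets mean in practice, the sketch below estimates the gain from one 30-minute boosted session. The per-action XP and stardust figures are rough assumptions chosen for illustration, not exact in-game values.

```python
# Rough value of a 30-minute boosted session (per-action amounts are assumptions, not exact game values).
evolutions, xp_per_evolution = 50, 1000          # assumed: mass-evolving cheap Pokemon
catches, stardust_per_catch = 40, 100            # assumed: catching at a lured spot

xp = evolutions * xp_per_evolution * 2           # lucky egg doubles XP
stardust = catches * stardust_per_catch * 1.5    # star piece adds 50% stardust

print(f"~{xp:,} XP and ~{stardust:,.0f} stardust in 30 minutes")
```

Even with conservative assumptions the point stands: line up a batch of activity first, then pop the item, so the whole 30 minutes is spent earning boosted rewards.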

          How to Win Battles and Raids

          -

          Battles and raids are challenging and rewarding activities that will test your skills as a trainer. You can battle other players in friendly battles or ranked battles, or team up with other players to defeat a powerful Pokemon in a raid. Here are some tips on how to win battles and raids:

          -
            -
          • Choose the right Pokemon for the battle or raid. The most important factor in winning a battle or raid is choosing the right Pokemon for the situation. You should consider the type, level, CP, IV, moves, and abilities of your Pokemon, as well as the type, level, CP, moves, and abilities of your opponent's Pokemon. You should also consider the weather, which can boost or weaken certain types of Pokemon and moves.
          • -
• Type is the most basic and crucial element of Pokemon battles and raids. Each Pokemon has one or two types and each move has a type, such as fire, water, grass, electric, etc. Each type has strengths and weaknesses against other types, which affect the damage a Pokemon or move deals or receives. For example, fire-type Pokemon and moves are strong against grass-type Pokemon and moves, but weak against water-type Pokemon and moves. You should use type advantages and disadvantages in your favor by choosing Pokemon and moves that are effective against your opponent's; a minimal lookup sketch of this idea follows the list.
          • -
          • Level, CP, IV, moves, and abilities are other factors that affect the performance of your Pokemon in battles and raids. Level is the measure of how much your Pokemon has grown and trained, and it affects its stats and CP. CP is the measure of how powerful your Pokemon is overall, and it is based on its stats and level. IV is the measure of how good your Pokemon's individual stats are, and it ranges from 0 to 15 for each stat. Moves are the actions that your Pokemon can perform in battles and raids, and they have different types, power, accuracy, energy cost, etc. Abilities are special traits that your Pokemon can have, and they can have various effects on battles and raids.
          • -
          • You should choose Pokemon that have high level, CP, IV, moves, and abilities for battles and raids. However, you should also consider the balance and synergy of your team. You should have a diverse team that can handle different types of opponents and situations. You should also have a team that can work well together by supporting each other with buffs, debuffs, heals, etc.
          • -
          • Use strategies and tactics to outsmart your opponent in battles and raids. Battles and raids are not just about brute force, but also about skill and strategy. You should use strategies and tactics to gain an edge over your opponent in battles and raids. For example, you can use switch tactics to change your active Pokemon when you have a type disadvantage or when you want to save a Pokemon for later. You can also use shield tactics to block your opponent's charged attacks or bait them into wasting their shields. You can also use energy tactics to manage your energy bar efficiently and unleash powerful charged attacks at the right time.
          • -
          -
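As a minimal illustration of the type idea above, the sketch below hard-codes a tiny slice of the type chart (three well-known matchups in both directions) and looks up a damage multiplier. The full chart covers 18 types; the 1.6 and 0.625 multipliers are the values commonly cited by the community for this game, so treat them as assumptions rather than official numbers.

```python
# A tiny, incomplete slice of the type chart for illustration only.
# Keys are (attacking_type, defending_type); anything not listed is treated as neutral.
EFFECTIVENESS = {
    ("fire", "grass"): 1.6,    # super effective
    ("water", "fire"): 1.6,
    ("grass", "water"): 1.6,
    ("fire", "water"): 0.625,  # not very effective
    ("water", "grass"): 0.625,
    ("grass", "fire"): 0.625,
}

def multiplier(move_type: str, target_type: str) -> float:
    return EFFECTIVENESS.get((move_type, target_type), 1.0)

print(multiplier("fire", "grass"))  # 1.6 -> favor fire moves against grass targets
print(multiplier("fire", "water"))  # 0.625 -> avoid fire moves against water targets
```

The practical habit is the same as the lookup: before committing a team, check what types the opponent is likely to use and pick attackers whose moves sit on the high side of the table.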

          Compatibility and Updates of Pokemon Go

          -

          Pokemon Go is a game that requires constant updates and compatibility checks to run smoothly on your device. Here are some things you need to know about compatibility and updates of Pokemon Go:

          -

          What are the Device Requirements and Supported Platforms for Pokemon Go?

          -

Pokemon Go requires a compatible device and platform to play. Here are the minimum device requirements for Android devices (a quick way to check them from a computer is sketched after this list):

          -
            -
          • Android 6 or above
          • -
          • 2 GB or more of RAM
          • -
          • Access to Google Play services
          • -
          • GPS and location services
          • -
          • Gyroscope and camera (optional)
          • -
          -
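If your phone is plugged into a computer with USB debugging enabled, you can check the two most common sticking points, Android version and RAM, without digging through the settings app. The sketch below assumes the standard `adb` tool from the Android SDK is installed and on your PATH and that a device is connected; it is only a convenience, not something the game itself requires.

```python
# Quick check of Android version and RAM over adb (requires USB debugging and adb on PATH).
import subprocess

def adb(*args: str) -> str:
    # Runs "adb shell <args>" and returns its trimmed output; assumes one connected device.
    return subprocess.run(["adb", "shell", *args], capture_output=True, text=True).stdout.strip()

release = adb("getprop", "ro.build.version.release")                # e.g. "13"
mem_kb = int(adb("grep", "MemTotal", "/proc/meminfo").split()[1])   # total RAM in kB

print(f"Android {release}, {mem_kb / 1024 / 1024:.1f} GB RAM")
print("Meets the listed minimums:", float(release.split(".")[0]) >= 6 and mem_kb >= 2 * 1024 * 1024)
```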

          Here are the minimum device requirements for iOS devices:

          -
            -
          • iOS 12 or above
          • -
          • iPhone 6s or above
          • -
          • iPad 5th generation or above
          • -
          • iPad mini 4 or above
          • -
          • iPad Air 2 or above
          • -
          • iPad Pro or above
          • -
          • iPod touch 7th generation or above
          • -
          • GPS and location services
          • -
          • Gyroscope and camera (optional)
          • -
          -

          Pokemon Go is supported on the following platforms:

          -
            -
          • Android devices that meet the minimum requirements
          • -
          • iOS devices that meet the minimum requirements
          • -
          • Samsung Galaxy Store devices that meet the minimum requirements
          • -
          • Pokemon Go Plus, which is a wearable device that connects to your smartphone via Bluetooth and lets you perform some actions in the game without looking at your screen
          • -
          • Pokemon Go Fest 2021 Print at Home Kit, which is a printable kit that lets you create your own immersive experience for the upcoming event
          • -
          -

          How to Update Pokemon Go to the Latest Version?

          -

To update Pokemon Go, open the store you installed it from (the Google Play Store, the Apple App Store, or the Samsung Galaxy Store), search for Pokemon Go, and tap Update when a new version is available; if you installed it from an apk file, download the latest apk from the same source and install it over the existing app. In short, Pokemon Go is a game that is all about catching, battling, trading, and raiding Pokemon. It is also a game that requires some tips and tricks to master, such as finding and catching rare Pokemon, using items and berries effectively, and winning battles and raids. And it is a game that needs compatibility checks and regular updates to run smoothly on your device, which you can handle through the Google Play Store, the Apple App Store, the Samsung Galaxy Store, or other sources.

          -

          Pokemon Go is a game that will keep you entertained and engaged for hours, as you explore the world and catch Pokemon. It is a game that will also connect you with other players and communities, as you make friends, exchange gifts, and join forces. It is a game that will also challenge you and reward you, as you complete tasks, quests, events, and achievements. It is a game that will also surprise you and delight you, as you discover new Pokemon, forms, features, and more.

          -

          If you are a fan of Pokemon or just looking for a fun and immersive game to play on your Android device, you should definitely try Pokemon Go. You can download the apk file from APKCombo.com or other sources and install it on your device. You can also follow the official website, blog, Twitter, Facebook, Instagram, or YouTube of Pokemon Go to stay updated on the latest news and announcements. You can also join the Reddit, Discord, or Facebook communities of Pokemon Go to interact with other players and get tips and support.

          -

          So what are you waiting for? Grab your smartphone, download the Pokemon Go apk file, and start your adventure today!

          -

          FAQs

          -

          Here are some frequently asked questions and answers about Pokemon Go:

          -

          What are some common problems and solutions for Pokemon Go?

          -

          Some of the common problems that players might encounter while playing Pokemon Go are:

          -
            -
• The game crashes or freezes: This might be caused by low memory, an incompatible device, outdated software, or network issues. You can try to clear the cache, restart the device, update the app or the device software, switch to a different network, or reinstall the app (clearing the app data and rebooting can also be done from a computer, as sketched after this list).
          • -
          • The GPS signal is not found: This might be caused by poor GPS reception, inaccurate location settings, or interference from other devices. You can try to move to a different location, turn on high accuracy mode in your location settings, turn off Bluetooth or Wi-Fi scanning in your device settings, or recalibrate your compass.
          • -
          • The battery drains quickly: This might be caused by high screen brightness, background apps, or other device settings. You can try to lower the screen brightness, close the background apps, turn on battery saver mode in the game settings or the device settings, or use an external battery pack.
          • -
          -
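For the crash-or-freeze case in the first bullet, clearing the app's data and rebooting can also be done over adb from a computer. The package id below is an assumption (it is the id commonly reported for the game), so confirm it first; also note that `pm clear` wipes locally stored data, so make sure your account is properly linked before running it.

```python
# Reset Pokemon Go's local data over adb. The package id is assumed; pm clear wipes local data!
import subprocess

PACKAGE = "com.nianticlabs.pokemongo"  # assumed package id - confirm with the list command below first

subprocess.run(["adb", "shell", "pm", "list", "packages", "pokemon"], check=True)  # confirm the real id
subprocess.run(["adb", "shell", "pm", "clear", PACKAGE], check=True)               # clear cached app data
subprocess.run(["adb", "reboot"], check=False)                                     # optional: restart the device
```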

          How to get free Pokecoins and other items in Pokemon Go?

          -

          Pokecoins are the premium currency in Pokemon Go that can be used to buy various items from the shop. There are two ways to get free Pokecoins in Pokemon Go:

          -
            -
• Defend gyms: You can earn up to 50 Pokecoins per day by placing your Pokemon in gyms and defending them from other players. You will get 1 Pokecoin for every 10 minutes that your Pokemon stays in a gym, up to a maximum of 8 hours and 20 minutes per day (the arithmetic behind that cap is shown after this list).
          • -
          • Complete tasks: You can earn up to 20 Pokecoins per day by completing certain tasks that are given by Professor Willow or other characters. These tasks can vary from catching Pokemon to spinning PokeStops to battling other players.
          • -
          -
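The 8 hours and 20 minutes figure in the first bullet is just the daily coin cap expressed as defence time: at 1 Pokecoin per 10 minutes, 50 coins takes 500 minutes. A one-line check:

```python
# Daily gym-coin cap expressed as defence time: 50 coins * 10 min/coin = 500 min = 8 h 20 min.
minutes = 50 * 10
print(divmod(minutes, 60))  # (8, 20) -> 8 hours and 20 minutes
```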

          Other items that you can get for free in Pokemon Go are:

          -
            -
          • Poke Balls, potions, revives, eggs, etc.: You can get these items by spinning PokeStops or gyms, opening gifts from friends, completing tasks or quests, participating in events or promotions, etc.
          • -
          • Berries, TMs, rare candy, etc.: You can get these items by participating in raids, completing tasks or quests, opening gifts from friends, earning rewards from battles, etc.
          • -
          • Stickers, clothing items, avatar poses, etc.: You can get these items by opening gifts from friends, completing tasks or quests, participating in events or promotions, buying them from the shop, etc.
          • -
          -

          How to connect Pokemon Go to other Pokemon games and devices?

          -

          Pokemon Go is a game that can be connected to other Pokemon games and devices to enhance your experience and unlock some benefits. Here are some of the ways to connect Pokemon Go to other Pokemon games and devices:

          -
            -
          • Pokemon Home: Pokemon Home is a service that lets you store and manage your Pokemon across different games and platforms. You can connect Pokemon Go to Pokemon Home and transfer your Pokemon from Pokemon Go to Pokemon Home. You can also transfer your Pokemon from Pokemon Home to other compatible games, such as Pokemon Sword and Shield or Pokemon Let's Go Pikachu and Eevee. However, you cannot transfer your Pokemon back from Pokemon Home to Pokemon Go. You can also get some rewards for connecting Pokemon Go to Pokemon Home, such as a Melmetal that can Gigantamax or a Mystery Box that spawns Meltan.
          • -
          • Pokemon Let's Go Pikachu and Eevee: Pokemon Let's Go Pikachu and Eevee are games for the Nintendo Switch that are based on the classic Pokemon Yellow game. You can connect Pokemon Go to Pokemon Let's Go Pikachu and Eevee and transfer your Kanto-region Pokemon from Pokemon Go to the games. You can also get some rewards for connecting Pokemon Go to the games, such as a special Pikachu or Eevee that can use exclusive moves or a Mystery Box that spawns Meltan.
          • -
          • Pokemon Sword and Shield: Pokemon Sword and Shield are games for the Nintendo Switch that are set in the Galar region. You can connect Pokemon Go to Pokemon Sword and Shield indirectly through Pokemon Home and transfer your Galarian-form Pokemon from Pokemon Go to the games. You can also get some rewards for transferring your Galarian-form Pokemon from Pokemon Go to the games, such as a Galarica Wreath that can evolve Galarian Slowpoke into Galarian Slowking.
          • -
          • Pokemon Fit Adventure: Pokemon Fit Adventure is a game for the Nintendo Switch that is designed to help you exercise and stay fit. You can connect Pokemon Go to Pokemon Fit Adventure and sync your steps and distance data from Pokemon Go to the game. You can also get some rewards for syncing your data from Pokemon Go to the game, such as coins, clothing items, or Pokemon encounters.
          • -
          • Pokemon Go Plus: Pokemon Go Plus is a wearable device that connects to your smartphone via Bluetooth and lets you perform some actions in the game without looking at your screen. You can use Pokemon Go Plus to catch Pokemon, spin PokeStops or gyms, track your steps and distance, etc. You can also customize the settings and notifications of Pokemon Go Plus through the app.
          • -
          -

          -

          This is the end of the article on Pokemon Go apk. I hope you enjoyed reading it and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md b/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md deleted file mode 100644 index d917ff8d8aa2fd125e39ef7de2cedd87f9c9086a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stick War 3 MOD APK The Most Epic Strategy Game with Unlimited Money and Free Soldiers.md +++ /dev/null @@ -1,140 +0,0 @@ -
          -

          Stick War 3 APK Mod Download: A Guide for Gamers

          -

          If you are a fan of strategy games, you might have heard of stick war 3, a popular game that lets you control an army of stick figures in a world where weapons are religion. Stick war 3 is a fun and addictive game that offers both single player and multiplayer modes, as well as a huge campaign with an engaging storyline. But what if you want to make your game experience even better? That's where an apk mod comes in.

          -

          An apk mod is a modified version of an original app that allows you to access features that are not available in the official version. For example, an apk mod can give you unlimited money, free soldiers, unlocked units, or enhanced graphics. By using an apk mod, you can enjoy the game without any limitations or restrictions.

          -

          stick war 3 apk mod download


          Download Zip ››››› https://urlca.com/2uOfNV



          -

          In this article, we will show you the main features of stick war 3 apk mod download, how to download and install it on your device, and some FAQs that you might have. Let's get started!

          -

          Main Features of Stick War 3 APK Mod Download

          -

          Stick war 3 apk mod download offers many features that make the game more exciting and enjoyable. Here are some of them:

          -

          Real-Time Multiplayer Strategy PVP Matches

          -

          With stick war 3 apk mod download, you can take control of any unit at any time, team up with your friends, and battle it out in 2v2 matches. You can also challenge other players from around the world and show off your skills and strategies.

          -

          Custom Armies and Battle Decks

          -

          Stick war 3 apk mod download allows you to build your own battle decks from a growing selection of army types. You can collect and unlock different units, spells, enchantments, and upgrades, and customize them to suit your playstyle. You can also add generals of each nation, such as Prince Atreyos or Princess Kytchu.

          -

          Single Player Modes and Massive Campaign

          -

          If you prefer to play solo, stick war 3 apk mod download has plenty of options for you. You can play a huge ever-expanding campaign with multiple chapters and animated cutscenes. You can also practice your strategies against AI opponents, or try out different scenarios in the proving grounds or daily battles.

          -

          Customization and Live Replays

          -

          Stick war 3 apk mod download lets you customize your troops with unique skins, statues, voice-lines, and emotes. You can also watch and share live replays of your games, pause, rewind, fast forward, or switch views.

          -


          -

          How to Download and Install Stick War 3 APK Mod

          -

          If you want to try out stick war 3 apk mod download, here are some things you need to know:

          -

          Requirements and Precautions

          -
            -
          • You need an Android device with at least Android version 5.0 or higher.
          • -
          • You need to enable unknown sources on your device settings to allow the installation of apps from outside the Google Play Store.
          • -
          • You need to uninstall the original version of stick war 3 if you have it on your device.
          • -
          • You need to be careful when downloading apk mods from unknown sources, as they may contain viruses or malware that can harm your device or compromise your privacy.
          • -
• You need to be aware that using apk mods may violate the terms of service of the game developer or publisher, and may result in bans or other penalties.

            Steps to Follow

            -

To download and install Stick War 3 APK Mod on your device, you need to follow these steps (an optional integrity check you can run before installing is sketched after the list):

            -
              -
1. Click on one of the sources to download stick war 3 apk mod from, such as 5play or Happymod.
            2. -
            3. Wait for the download to finish and locate the apk file on your device.
            4. -
            5. Tap on the apk file and follow the instructions to install it.
            6. -
            7. Launch the game and enjoy the mod features.
            8. -
            -
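Before step 3 (tapping the downloaded file), it is worth confirming that the file arrived intact and matches whatever checksum the download page publishes, if it publishes one. The file name and the expected hash below are placeholders, not values from any real source.

```python
# Verify a downloaded apk against a published checksum before installing (name and hash are placeholders).
import hashlib
from pathlib import Path

apk_path = Path("stick_war_3_mod.apk")      # placeholder file name - use your actual download
expected = "put-the-published-sha256-here"  # placeholder hash copied from the download page, if provided

digest = hashlib.sha256(apk_path.read_bytes()).hexdigest()
print(digest)
print("OK to install" if digest == expected else "Hash mismatch - do not install")
```

If no checksum is published, the printed digest is still useful for comparing the file against a second download from the same source.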

            Sources to Download From

            -

            There are many sources to download stick war 3 apk mod from, but not all of them are reliable or safe. You should always check the reviews, ratings, and comments of other users before downloading any apk mod. Here are some of the sources that we recommend:

| Source | Description | Link |
| --- | --- | --- |
| 5play | A website that offers a variety of apk mods for different games and apps, including Stick War 3. It has a user-friendly interface and fast download speed. | 5play |
| Happymod | A platform that provides modded versions of popular games and apps, such as Stick War 3. It has a large community of users and moderators who test and verify the mods. | Happymod |
            -

            Conclusion

            -

            Stick war 3 is a great game for strategy lovers, but it can be even better with an apk mod. An apk mod can give you access to features that are not available in the official version, such as unlimited money, free soldiers, unlocked units, or enhanced graphics. By using an apk mod, you can enjoy the game without any limitations or restrictions.

            -

            In this article, we have shown you the main features of stick war 3 apk mod download, how to download and install it on your device, and some FAQs that you might have. We hope that this guide has been helpful and informative for you. If you want to try out stick war 3 apk mod download, you can use one of the sources that we have recommended above. Remember to be careful when downloading apk mods from unknown sources, as they may contain viruses or malware that can harm your device or compromise your privacy.

            -

            If you like stick war 3 and want to support the game developer and publisher, you can also buy the official version from the Google Play Store or the App Store. The official version may not have all the features that the apk mod has, but it is more secure and stable. You can also enjoy regular updates and new content from the game developer and publisher.

            -

            Whether you choose to play stick war 3 with or without an apk mod, we hope that you have fun and enjoy the game. Stick war 3 is a game that offers hours of entertainment and challenge for gamers of all ages and skill levels. It is a game that tests your creativity, strategy, and reflexes. It is a game that lets you create your own army of stick figures and lead them to victory against other nations.

            -

            So what are you waiting for? Download stick war 3 apk mod today and start your epic adventure!

            -

            FAQs

            -

            Here are some of the frequently asked questions that you might have about stick war 3 apk mod download:

            -

            What are the benefits of using stick war 3 apk mod?

            -

            The benefits of using stick war 3 apk mod are:

            -
              -
            • You can access features that are not available in the official version, such as unlimited money, free soldiers, unlocked units, or enhanced graphics.
            • -
            • You can enjoy the game without any limitations or restrictions.
            • -
            • You can customize your troops with unique skins, statues, voice-lines, and emotes.
            • -
            • You can watch and share live replays of your games.
            • -
            • You can team up with your friends and challenge other players from around the world.
            • -
            -

            Is stick war 3 apk mod safe and legal?

            -

            The safety and legality of stick war 3 apk mod depend on several factors:

            -
              -
            • The source that you download it from. You should always check the reviews, ratings, and comments of other users before downloading any apk mod. You should also scan the apk file with an antivirus software before installing it.
            • -
            • The terms of service of the game developer or publisher. You should be aware that using apk mods may violate the terms of service of the game developer or publisher, and may result in bans or legal actions. You should respect the intellectual property rights of the game developer or publisher, and support them by buying the official version of the game.
            • -
• The device that you use it on. You should make sure that your device meets the minimum requirements and has enough storage space to run the apk mod. You should also back up your data and files before installing the apk mod, in case something goes wrong.
            • -
            -

            How can I update stick war 3 apk mod?

            -

            To update stick war 3 apk mod, you need to follow these steps:

            -
              -
            1. Check if there is a new version of the apk mod available from the source that you downloaded it from.
            2. -
            3. Download the new version of the apk mod and uninstall the old one.
            4. -
            5. Install the new version of the apk mod and launch the game.
            6. -
            -

Note that updating the apk mod may erase your progress or data, so you should back up your files before updating.

            -

            What are some tips and tricks for playing stick war 3?

            -

            Here are some tips and tricks that can help you improve your gameplay and strategy in stick war 3:

            -
              -
            • Learn the strengths and weaknesses of each unit type, and use them accordingly. For example, archers are good at long range, but weak at close combat. Speartons are good at defending, but slow at moving. Magikill are good at casting spells, but vulnerable to attacks.
            • -
            • Balance your economy and military. You need to collect gold and mana to build and upgrade your units, but you also need to train and deploy your units to attack and defend. You should not spend all your resources on one aspect, but rather distribute them wisely.
            • -
            • Use spells and enchantments wisely. Spells and enchantments can give you an edge in battle, but they also cost mana and have cooldowns. You should use them when they are most effective, such as when you have a large army or when you face a strong enemy.
            • -
            • Watch and learn from other players. You can watch live replays of other players' games, or join a clan and chat with other players. You can learn from their strategies, tactics, and mistakes, and apply them to your own games.
            • -
            -

            Where can I find more information about stick war 3?

            -

            If you want to find more information about stick war 3, such as news, updates, guides, or forums, you can visit these websites:

            -
              -
            • [Stick War 3 Official Website]: The official website of the game developer and publisher, where you can find the latest news, updates, features, and support for the game.
            • -
            • [Stick War 3 Wiki]: A fan-made wiki that contains detailed information about the game, such as units, spells, enchantments, generals, modes, maps, and more.
            • -
            • [Stick War 3 Reddit]: A subreddit dedicated to the game, where you can find discussions, questions, answers, tips, tricks, memes, fan art, and more.
            • -

            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md b/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md deleted file mode 100644 index b8bc6f46c8aeb85244beedddd1e86b0ae336b670..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and Gems in Talking Tom Gold Run Mod Apk v6.4.0.2467 - Download Now.md +++ /dev/null @@ -1,98 +0,0 @@ - -

            Talking Tom Gold Run Mod APK 2023: Unlimited Money and Fun

            -

            Do you love running games? Do you want to join Talking Tom and his friends in an endless chase for gold? Do you want to enjoy unlimited money and diamonds, unlock all characters and outfits, explore different worlds and themes, enjoy dynamic and vivid graphics, and compete with other players online? If your answer is yes, then you should download Talking Tom Gold Run Mod APK 2023 right now!

            -

            Introduction

            -

            In this article, we will tell you everything you need to know about Talking Tom Gold Run Mod APK 2023, including what it is, why you should download it, what features it offers, and how to download and install it on your device. So, without further ado, let's get started!

            -

            talking tom gold run mod apk 2023


            DOWNLOAD 🔗 https://urlca.com/2uO6bg



            -

            What is Talking Tom Gold Run?

            -

            Talking Tom Gold Run is a popular running game developed by Outfit7 Limited, the creators of the famous Talking Tom and Friends series. In this game, you have to run after a robber who has stolen your gold and avoid various obstacles along the way. You can also collect gold bars, diamonds, boosters, and power-ups to enhance your gameplay. You can also customize your character with different outfits and accessories, and unlock new characters such as Angela, Hank, Ginger, Ben, and more. You can also explore different worlds and themes such as city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more.

            -

            What is Talking Tom Gold Run Mod APK?

            -

            Talking Tom Gold Run Mod APK is a modified version of the original game that gives you access to unlimited money and diamonds, unlocks all characters and outfits, removes ads, and provides other benefits. With this mod apk, you can enjoy the game without any limitations or restrictions. You can buy anything you want from the shop, upgrade your character's skills and abilities, unlock new worlds and themes, and have more fun than ever.

            -

            Why should you download Talking Tom Gold Run Mod APK 2023?

            -

            There are many reasons why you should download Talking Tom Gold Run Mod APK 2023. Here are some of them:

            -
              -
            • You can enjoy unlimited money and diamonds that you can use to buy anything from the shop.
            • -
            • You can unlock all characters and outfits that you can use to customize your character.
            • -
            • You can explore different worlds and themes that offer different challenges and environments.
            • -
            • You can enjoy dynamic and vivid graphics that make the game more realistic and immersive.
            • -
            • You can compete with other players online and see who can run the farthest.
            • -
            -

            Features of Talking Tom Gold Run Mod APK 2023

            -

            Talking Tom Gold Run Mod APK 2023 offers many features that make the game more enjoyable and exciting. Here are some of them:

            -

            Unlimited money and diamonds

            -

With this mod apk, you will never run out of money or diamonds. You can use them to buy anything from the shop, such as boosters, power-ups, upgrades, outfits, accessories, and more. You can also use them to unlock new characters such as Angela, Hank, Ginger, Ben, and more. You can also use them to unlock new worlds and themes such as city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more.

Unlock all characters and outfits

-

            With this mod apk, you can unlock all the characters and outfits that are available in the game. You can choose from Talking Tom, Angela, Hank, Ginger, Ben, and more. Each character has their own personality and voice. You can also customize your character with different outfits and accessories, such as hats, glasses, shoes, shirts, pants, dresses, and more. You can mix and match different items to create your own unique style.

            -

            Explore different worlds and themes

            -

            With this mod apk, you can explore different worlds and themes that offer different challenges and environments. You can run through city streets, subway tunnels, tropical beaches, Chinese temples, snowy mountains, and more. Each world has its own obstacles, enemies, and scenery. You can also collect different items and coins in each world. You can also switch between different worlds and themes as you wish.

            -


            -

            Enjoy dynamic and vivid graphics

            -

            With this mod apk, you can enjoy dynamic and vivid graphics that make the game more realistic and immersive. The game uses 3D animation and high-quality graphics to create a stunning visual experience. You can see the details of the characters, the environments, the effects, and the movements. You can also enjoy the smooth and fast gameplay that does not lag or crash.

            -

            Compete with other players online

            -

            With this mod apk, you can compete with other players online and see who can run the farthest. You can connect with your friends or other players from around the world. You can see their scores and rankings on the leaderboard. You can also chat with them and send them messages. You can also challenge them to a race and see who is faster.

            -

            How to download and install Talking Tom Gold Run Mod APK 2023

            -

            If you want to download and install Talking Tom Gold Run Mod APK 2023 on your device, you need to follow these simple steps:

            -

            Step 1: Download the mod apk file from a trusted source

            -

            The first step is to download the mod apk file from a trusted source. You can use the link below to download the latest version of Talking Tom Gold Run Mod APK 2023. The file size is about 90 MB, so make sure you have enough space on your device.

            -

            Download Talking Tom Gold Run Mod APK 2023

            -

            Step 2: Enable unknown sources on your device settings

            -

            The second step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

            -

            Step 3: Install the mod apk file and launch the game

            -

            The third step is to install the mod apk file and launch the game. To do this, locate the downloaded file on your device storage > tap on it > install > open. The game will start automatically and you can enjoy all the features of Talking Tom Gold Run Mod APK 2023.
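If you prefer installing from a computer instead of tapping the file on the phone, the standard `adb install` command does the same thing as this step. This assumes USB debugging is enabled and adb is on your PATH; the file name below is a placeholder for whatever you actually downloaded.

```python
# Sideload the downloaded apk from a computer (requires USB debugging and adb on PATH).
import subprocess

APK = "talking_tom_gold_run_mod.apk"  # placeholder file name - use the actual downloaded file

# -r reinstalls/updates if an older build is already present on the device
result = subprocess.run(["adb", "install", "-r", APK], capture_output=True, text=True)
print(result.stdout or result.stderr)  # expect "Success" on a working install
```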

            -

            Conclusion

            -

            Talking Tom Gold Run Mod APK 2023 is a great running game that offers unlimited money and fun. You can enjoy all the features of the game without any limitations or restrictions. You can unlock all characters and outfits, explore different worlds and themes, enjoy dynamic and vivid graphics, and compete with other players online. You can also download and install Talking Tom Gold Run Mod APK 2023 easily by following the steps above. So, what are you waiting for? Download Talking Tom Gold Run Mod APK 2023 now and join Talking Tom and his friends in an endless chase for gold!

            -

            FAQs

            -

            Here are some frequently asked questions about Talking Tom Gold Run Mod APK 2023:

            -
              -
            • Is Talking Tom Gold Run Mod APK 2023 safe to use?
            • -
            • Yes, Talking Tom Gold Run Mod APK 2023 is safe to use. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it before installing it.
            • -
            • Is Talking Tom Gold Run Mod APK 2023 compatible with my device?
            • -
            • Talking Tom Gold Run Mod APK 2023 is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices may not support some features or functions of the game due to hardware or software limitations.
            • -
            • Can I update Talking Tom Gold Run Mod APK 2023?
            • -
• Talking Tom Gold Run Mod APK 2023 is updated regularly to fix bugs and improve performance. However, you may not be able to update it from the Google Play Store, as it is a modified version of the original game. To update it, you need to download the latest version of the mod apk file from the same source that you downloaded it from before and install it over the existing one. You can also check for updates on our website or follow us on social media for the latest news and updates.
            • -
            • Will Talking Tom Gold Run Mod APK 2023 affect my progress in the original game?
            • -
            • No, Talking Tom Gold Run Mod APK 2023 will not affect your progress in the original game. The mod apk file is installed separately from the original game and does not interfere with it. You can play both games on the same device without any problems. However, you should not use the same account or login details for both games, as this may cause conflicts or errors.
            • -
            • Can I play Talking Tom Gold Run Mod APK 2023 offline?
            • -
            • Yes, you can play Talking Tom Gold Run Mod APK 2023 offline. You do not need an internet connection to play the game, except for some features that require online access, such as competing with other players online, watching ads, or downloading additional data. You can enjoy the game offline without any limitations or restrictions.
            • -
            -

            I hope this article has answered all your questions about Talking Tom Gold Run Mod APK 2023. If you have any more questions or feedback, please feel free to leave a comment below or contact us via email. Thank you for reading and happy gaming!

            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md b/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md deleted file mode 100644 index 34f324a31184b68b9ba82a18989a3921740c6bf3..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/slam Evi Peymbrin (s.a.v) hyat hli-beyti v shablri haqqnda maraql faktlar.md +++ /dev/null @@ -1,134 +0,0 @@ -
            -

            What is Islamevi and why you should visit it

            -

            If you are a Muslim living in Azerbaijan or interested in learning more about Islam, you might have heard of Islamevi. But what is Islamevi and why should you visit it? In this article, we will answer these questions and show you how Islamevi can enrich your life with Islamic knowledge, guidance, and community.

            -

            islamevi


            Download File ---> https://urlca.com/2uOfsA



            -

            Islamevi: A home for Muslims in Azerbaijan

            -

            Islamevi, which means "the home of Islam" in Azerbaijani, is a non-profit organization that aims to spread the message of Islam and serve the Muslim community in Azerbaijan. It was founded in 2018 by a group of young Muslims who wanted to create a platform where Muslims can learn, practice, and share their faith.

            -

            The mission and vision of Islamevi

            -

            The mission of Islamevi is to provide authentic and reliable information about Islam, based on the Quran and the Sunnah, to the Azerbaijani people. It also strives to promote Islamic values, morals, and ethics in the society, and to foster unity and cooperation among Muslims.

            -

            The vision of Islamevi is to become a leading Islamic organization in Azerbaijan that contributes to the spiritual, intellectual, and social development of the Muslim community. It also hopes to inspire more people to embrace Islam and to live according to its teachings.

            -

            The services and activities of Islamevi

            -

            Islamevi offers a variety of services and activities for Muslims of all ages, backgrounds, and interests. Some of these include:

            -
              -
            • Articles on various topics related to Islam, such as beliefs, practices, history, culture, ethics, etc.
            • -
            • Question-and-answer sessions where Muslims can ask their doubts and queries about Islam and get answers from qualified scholars.
            • -
            • Online courses and webinars on Islamic sciences, such as Quran, Hadith, Fiqh, Aqeedah, etc.
            • -
            • Offline classes and workshops on Islamic subjects, such as Arabic language, Tajweed, Tafsir, etc.
            • -
            • Events and programs on Islamic occasions, such as Ramadan, Eid, Qurban, etc.
            • -
            • Social media posts and videos that share Islamic reminders, stories, quotes, etc.
            • -
            • Charity and humanitarian projects that help the needy and the oppressed in Azerbaijan and abroad.
            • -
            -

            How to access Islamevi online and offline

            -

            Islamevi has a strong online presence as well as a physical location where Muslims can visit and benefit from its services. Here are some ways to access Islamevi online and offline:

            -


            -

            The website of Islamevi

            -

            The website of Islamevi is www.islamevi.az, where you can find all the articles, questions-and-answers, courses, webinars, events, programs, social media links, charity projects, contact details, and more. You can also subscribe to their newsletter to get updates on their latest activities.

            -

            The social media accounts of Islamevi

            -

Islamevi has active accounts on various social media platforms, such as Facebook (@islamevi.az), Instagram (@islamevi.az), YouTube (İslam Evi), Telegram (@islam_evi), WhatsApp (+994 50 123 45 67), and Twitter (@islamevi_az). You can follow them to get daily Islamic reminders, stories, quotes, videos, and more. You can also interact with them and share your feedback and suggestions.

            -

            The physical location and contact details of Islamevi

            -

            Islamevi has a center in Baku, the capital city of Azerbaijan, where you can visit and join their classes, workshops, events, and programs. The address of the center is: İslam Evi, Nizami küç. 123, Bakı AZ1000. You can also call them at +994 12 345 67 89 or email them at info@islamevi.az.

            -

            The benefits of following Islamevi

            -

            Following Islamevi can bring many benefits to your life as a Muslim or a seeker of truth. Here are some of them:

            -

            Learn more about Islam and its teachings

            -

            Islamevi provides you with authentic and reliable information about Islam and its teachings, based on the Quran and the Sunnah. You can learn more about the basics of Islam, such as the pillars of faith and practice, the articles of belief, the sources of legislation, etc. You can also learn more about the advanced topics of Islam, such as the sciences of Quran, Hadith, Fiqh, Aqeedah, etc. You can also learn more about the history, culture, and civilization of Islam and its contributions to humanity.

            -

            Connect with other Muslims and share your experiences

            -

            Islamevi connects you with other Muslims who share your faith and values. You can join their online community and interact with them through their website and social media accounts. You can also join their offline community and meet them in person at their center or events. You can share your experiences, challenges, joys, and sorrows with them. You can also support them, advise them, and learn from them.

            -

            Participate in various events and programs organized by Islamevi

            -

            Islamevi organizes various events and programs for Muslims throughout the year. Some of these include:

| Name | Description | Date |
| --- | --- | --- |
| Ramadan Program | A series of lectures, webinars, quizzes, competitions, and charity projects related to Ramadan. | The whole month of Ramadan. |
| Eid Festival | A celebration of Eid al-Fitr and Eid al-Adha with prayers, games, food, gifts, and entertainment. | The first day of Shawwal and the tenth day of Dhul-Hijjah. |
| Quran Competition | A competition to test the memorization and recitation skills of the participants. | The last week of Rajab. |
| Hajj Workshop | A workshop to teach the rules and rituals of Hajj and Umrah. | The first week of Dhul-Qadah. |
| Mawlid Celebration | A celebration of the birth of Prophet Muhammad (peace be upon him) with songs, poems, stories, and lectures. | The twelfth day of Rabi al-Awwal. |
            -

            Conclusion

            -

            Islamevi is a home for Muslims in Azerbaijan that provides them with Islamic knowledge, guidance, and community. It is a platform where Muslims can learn, practice, and share their faith. It is also a place where non-Muslims can discover the beauty and wisdom of Islam. If you are looking for a reliable source of Islamic information and a supportive network of Muslim friends, you should visit Islamevi today.

            -

            FAQs

            -

            Q: Is Islamevi affiliated with any political or sectarian group?

            -

A: No, Islamevi is an independent organization that follows the Quran and the Sunnah according to the understanding of the righteous predecessors (Salaf). It does not belong to any political or sectarian group or agenda.

            -

            Q: How can I support Islamevi financially?

            -

            A: You can support Islamevi financially by donating to their charity and humanitarian projects, such as feeding the poor, helping the orphans, supporting the refugees, etc. You can also sponsor their events and programs, such as Ramadan program, Eid festival, Quran competition, etc. You can donate online through their website or offline at their center.

            -

            Q: How can I volunteer for Islamevi?

            -

            A: You can volunteer for Islamevi by offering your skills, time, and energy to help them in their various activities. You can join their team of writers, editors, translators, designers, developers, teachers, organizers, etc. You can also help them in spreading their message and inviting more people to their platform. You can contact them through their website or social media accounts to express your interest and availability.

            -

            Q: How can I contact Islamevi for any questions or feedback?

            -

            A: You can contact Islamevi for any questions or feedback through their website or social media accounts. You can also call them at +994 12 345 67 89 or email them at info@islamevi.az. They are always happy to hear from you and to assist you in any way possible.

            -

            Q: What are some of the challenges and opportunities that Islamevi faces?

            -

            A: Some of the challenges that Islamevi faces are:

            -
            • Lack of awareness and misconceptions about Islam among the Azerbaijani people.
            • Lack of resources and funding to sustain and expand their services and activities.
            • Lack of qualified and committed staff and volunteers to run their operations and projects.
            -

            Some of the opportunities that Islamevi has are:

            -
            • High demand and interest for Islamic education and guidance among the Azerbaijani people.
            • High potential and talent of the young Muslim generation in Azerbaijan.
            • High support and cooperation from the government and other Islamic organizations in Azerbaijan.
            -

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md b/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md deleted file mode 100644 index f75407188373fdd382764aa4713ebad492c96287..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ansys 10.0 Software Free Download [HOT].md +++ /dev/null @@ -1,20 +0,0 @@ -

            ansys 10.0 software free download


            Download File →→→ https://ssurll.com/2uzxb8



            - -Visit the website to learn more. - -From planning and simulation to real-world application, this comprehensive collection of over 400 case studies from the past three decades is designed to provide readers with practical and applicable solutions. - -This work-study supplement provides complete, sound and interactive answers to all questions, contains the same essential content as the full paper-and-pencil version, and is an invaluable supplement for students working towards the SAE 2003 certification or any other course in automotive technology. - -An introductory module to systems dynamics. This is the first book to apply systems dynamics to project management. It helps readers to understand the basic concepts of system dynamics and how to apply them to the project environment. - -This book explores the relationship between knowledge management and intellectual capital. The book looks at why companies have adopted knowledge management and explores how organisations can build intellectual capital. It looks at the factors that determine knowledge sharing and the impact of intellectual capital on the knowledge economy. It examines the definition of intellectual capital and outlines the benefits that knowledge sharing can bring to the organisation. This book argues that knowledge management is, in fact, the application of intellectual capital. The book concludes with an examination of the concepts of knowledge and the importance of intellectual capital and then examines how organisations can build their intellectual capital. - -This book is aimed at students studying Biotechnology for the degree of the BSc (Hons) Applied Biosciences or the undergraduate Diploma in Applied Biosciences. It provides up-to-date information in the field of applied biosciences for students in these courses and helps to develop essential lab skills. The book covers the main aspects of biotechnology for the students at all levels from those in their first year studying Biotechnology. - -This book provides a platform for the organised growth of companies. It provides a coherent framework for understanding the issues involved in developing and managing a business, and the practical skills required. It highlights the importance of strategy, managing from the centre, leadership, risk assessment, financial performance and controlling and reporting. It provides easy to follow models, case studies and exercises to reinforce concepts and practise them in an applied context. - -The book examines the working of the European Unions internal market and the reform proposals to improve the functioning of the internal market, which includes revising the treaties, establishing a new treaty and operating mechanism, setting out priorities for a new legal framework and how to influence policy. This book also investigates the reform proposals for the Single Market of Services and for Data Protection. The book 4fefd39f24
            -
            -
            -

            diff --git a/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md b/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md deleted file mode 100644 index 5f5c0c41b9834423f4043ecc922d0b79b07bfa5f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-..md +++ /dev/null @@ -1,6 +0,0 @@ -

            DC.Comics.-.Bombshells.004..2015...digital...Minutemen-Thoth..cbr..-.Nem.-.


            DOWNLOAD ○○○ https://ssurll.com/2uzyFa



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py deleted file mode 100644 index 4e8f1877978840aede93774d86643b129751db13..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from annotator.mmpkg.mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. - """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. 
- - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. - By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. 
Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. 
- """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/spaces/course-demos/whisper-small/README.md b/spaces/course-demos/whisper-small/README.md deleted file mode 100644 index db18712bd3b533bd10b9e7f18057c59d6acf2641..0000000000000000000000000000000000000000 --- a/spaces/course-demos/whisper-small/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Whisper Small -emoji: 🌍 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py b/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py deleted file mode 100644 index 5814748777cd3e262b268d1de9ee7637f4df79c9..0000000000000000000000000000000000000000 --- a/spaces/cybercorejapan/human-detection-docker/models/detectors/mmyolov8.py +++ /dev/null @@ -1,86 +0,0 @@ -from typing import List, Dict, Tuple -import numpy as np -import torch -from .yolov7 import YOLOBase -from models.base.trt_base import TRT_Base -from models.base.onnx_base import ONNX_Base - -class MMYOLOv8TRT(TRT_Base, YOLOBase): - def __init__(self, - preprocess_cfg: Dict=dict( - border_color=(114, 114, 114), - auto=False, - scaleFill=True, - scaleup=True, - stride=32), - nms_agnostic_cfg: Dict=dict( - type='nms', - iou_threshold=0.9, - class_agnostic=True), - score_thr=0.1, - img_shape: Tuple[int, int]=(640, 640), - batch_size: int=32, - model_path: str="", - device: str='0',): - """ YOLOv8 TRT class for inference, which is based on TRT_Base and YOLOBase. - """ - self.img_shape = img_shape - self.batch_size = batch_size - input_shape = (self.batch_size, *self.img_shape) - super().__init__(input_shape, model_path, device) - YOLOBase.__init__(self, preprocess_cfg=preprocess_cfg, - nms_agnostic_cfg=nms_agnostic_cfg, - score_thr=score_thr, - use_torch=True) - - def infer_batch(self, image_batch: np.ndarray) -> List[Dict]: - """ Batch inference function for batch input image. - - Args: - image_batch (np.ndarray): batch of input image. - """ - - tensor_data, height, width, ratio, dwdh = self.preprocess(image_batch) - self.change_runtime_dimension(input_shape=(len(tensor_data), 3, height, width)) - self.model['binding_addrs']['input'] = int(tensor_data.data_ptr()) - self.model['context'].execute_v2(list(self.model['binding_addrs'].values())) - dets = self.model['bindings']['dets'].data.cpu() - classes = self.model['bindings']['labels'].data.cpu() - boxes = dets[:,:,:4] - scores = dets[:,:,4] - - return self.post_process(boxes, scores, classes, ratio, dwdh) - -class MMYOLOv8ONNX(ONNX_Base, YOLOBase): - def __init__(self, - preprocess_cfg, - nms_agnostic_cfg, - score_thr=0.1, - img_shape: Tuple[int, int]=(640, 640), - batch_size: int=32, - model_path: str="", - device: str='0',): - """ YOLOv7 ONNX class for inference, which is based on ONNX_Base and YOLOBase. 
- """ - self.img_shape = img_shape - self.batch_size = batch_size - input_shape = (self.batch_size, *self.img_shape) - super().__init__(input_shape, model_path, device) - YOLOBase.__init__(self, - preprocess_cfg=preprocess_cfg, - nms_agnostic_cfg=nms_agnostic_cfg, - score_thr=score_thr, - use_torch=False) - - def infer_batch(self, image_batch: np.ndarray) -> List[Dict]: - - numpy_array_data, height, width, ratio, dwdh = self.preprocess(image_batch) - numpy_array_data = numpy_array_data.astype(np.float32) - results = super().infer_batch(numpy_array_data) - dets, classes = results - dets = torch.from_numpy(dets) - classes = torch.from_numpy(classes) - boxes = dets[:,:,:4] - scores = dets[:,:,4] - - return self.post_process(boxes, scores, classes, ratio, dwdh) diff --git a/spaces/cymic/VITS-Tokaiteio/models.py b/spaces/cymic/VITS-Tokaiteio/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/cymic/VITS-Tokaiteio/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) 
- u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows 
= n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, 
self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - 
self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - 
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py deleted file mode 100644 index 60d9708625b0d09d6637a8486389db9dbf0da5a2..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/realesrgan_model.py +++ /dev/null @@ -1,135 +0,0 @@ -import os -import sys -import traceback - -import numpy as np -from PIL import Image -from basicsr.utils.download_util import load_file_from_url -from realesrgan import RealESRGANer - -from modules.upscaler import Upscaler, UpscalerData -from modules.paths import models_path -from modules.shared import cmd_opts, opts - - -class UpscalerRealESRGAN(Upscaler): - def __init__(self, path): - self.name = "RealESRGAN" - self.model_path = os.path.join(models_path, self.name) - self.user_path = path - super().__init__() - try: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan import RealESRGANer - from realesrgan.archs.srvgg_arch import SRVGGNetCompact - self.enable = True - self.scalers = [] - scalers = self.load_models(path) - for scaler in scalers: - if scaler.name in opts.realesrgan_enabled_models: - self.scalers.append(scaler) - - except Exception: - print("Error importing Real-ESRGAN:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - self.enable = False - self.scalers = [] - - def do_upscale(self, img, path): - if not self.enable: - return img - - info = self.load_model(path) - if not os.path.exists(info.data_path): - print("Unable to load RealESRGAN model: %s" % info.name) - return img - - upsampler = RealESRGANer( - scale=info.scale, - model_path=info.data_path, - model=info.model(), - half=not cmd_opts.no_half, - tile=opts.ESRGAN_tile, - tile_pad=opts.ESRGAN_tile_overlap, - ) - - upsampled = upsampler.enhance(np.array(img), outscale=info.scale)[0] - - image = Image.fromarray(upsampled) - return image - - def load_model(self, path): - try: - info = None - for scaler in self.scalers: - if scaler.data_path == path: - info = scaler - - if info is None: - print(f"Unable to find model info: {path}") - return None - - model_file = load_file_from_url(url=info.data_path, model_dir=self.model_path, progress=True) - info.data_path = model_file - return info - except Exception as e: - print(f"Error making Real-ESRGAN models list: {e}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - return None - - def load_models(self, _): - return get_realesrgan_models(self) - - -def 
get_realesrgan_models(scaler): - try: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan.archs.srvgg_arch import SRVGGNetCompact - models = [ - UpscalerData( - name="R-ESRGAN General 4xV3", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN General WDN 4xV3", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN AnimeVideo", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN 4x+", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth", - scale=4, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - ), - UpscalerData( - name="R-ESRGAN 4x+ Anime6B", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth", - scale=4, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - ), - UpscalerData( - name="R-ESRGAN 2x+", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth", - scale=2, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - ), - ] - return models - except Exception as e: - print("Error making Real-ESRGAN models list:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/detection/yolov5face/models/__init__.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/detection/yolov5face/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py deleted file mode 100644 index d279f89cc82cc280370d09ebdb16cb301f62aa57..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/filenames.py +++ /dev/null @@ -1,246 +0,0 @@ -""" -This module implements the algorithm for converting between a "user name" - -something that a user can choose arbitrarily inside a font editor - and a file -name suitable for use in a wide range of operating systems and filesystems. - -The `UFO 3 specification `_ -provides an example of an algorithm for such conversion, which avoids illegal -characters, reserved file names, ambiguity between upper- and lower-case -characters, and clashes with existing files. - -This code was originally copied from -`ufoLib `_ -by Tal Leming and is copyright (c) 2005-2016, The RoboFab Developers: - -- Erik van Blokland -- Tal Leming -- Just van Rossum -""" - - -illegalCharacters = r"\" * + / : < > ? 
[ \ ] | \0".split(" ") -illegalCharacters += [chr(i) for i in range(1, 32)] -illegalCharacters += [chr(0x7F)] -reservedFileNames = "CON PRN AUX CLOCK$ NUL A:-Z: COM1".lower().split(" ") -reservedFileNames += "LPT1 LPT2 LPT3 COM2 COM3 COM4".lower().split(" ") -maxFileNameLength = 255 - - -class NameTranslationError(Exception): - pass - - -def userNameToFileName(userName, existing=[], prefix="", suffix=""): - """Converts from a user name to a file name. - - Takes care to avoid illegal characters, reserved file names, ambiguity between - upper- and lower-case characters, and clashes with existing files. - - Args: - userName (str): The input file name. - existing: A case-insensitive list of all existing file names. - prefix: Prefix to be prepended to the file name. - suffix: Suffix to be appended to the file name. - - Returns: - A suitable filename. - - Raises: - NameTranslationError: If no suitable name could be generated. - - Examples:: - - >>> userNameToFileName("a") == "a" - True - >>> userNameToFileName("A") == "A_" - True - >>> userNameToFileName("AE") == "A_E_" - True - >>> userNameToFileName("Ae") == "A_e" - True - >>> userNameToFileName("ae") == "ae" - True - >>> userNameToFileName("aE") == "aE_" - True - >>> userNameToFileName("a.alt") == "a.alt" - True - >>> userNameToFileName("A.alt") == "A_.alt" - True - >>> userNameToFileName("A.Alt") == "A_.A_lt" - True - >>> userNameToFileName("A.aLt") == "A_.aL_t" - True - >>> userNameToFileName(u"A.alT") == "A_.alT_" - True - >>> userNameToFileName("T_H") == "T__H_" - True - >>> userNameToFileName("T_h") == "T__h" - True - >>> userNameToFileName("t_h") == "t_h" - True - >>> userNameToFileName("F_F_I") == "F__F__I_" - True - >>> userNameToFileName("f_f_i") == "f_f_i" - True - >>> userNameToFileName("Aacute_V.swash") == "A_acute_V_.swash" - True - >>> userNameToFileName(".notdef") == "_notdef" - True - >>> userNameToFileName("con") == "_con" - True - >>> userNameToFileName("CON") == "C_O_N_" - True - >>> userNameToFileName("con.alt") == "_con.alt" - True - >>> userNameToFileName("alt.con") == "alt._con" - True - """ - # the incoming name must be a str - if not isinstance(userName, str): - raise ValueError("The value for userName must be a string.") - # establish the prefix and suffix lengths - prefixLength = len(prefix) - suffixLength = len(suffix) - # replace an initial period with an _ - # if no prefix is to be added - if not prefix and userName[0] == ".": - userName = "_" + userName[1:] - # filter the user name - filteredUserName = [] - for character in userName: - # replace illegal characters with _ - if character in illegalCharacters: - character = "_" - # add _ to all non-lower characters - elif character != character.lower(): - character += "_" - filteredUserName.append(character) - userName = "".join(filteredUserName) - # clip to 255 - sliceLength = maxFileNameLength - prefixLength - suffixLength - userName = userName[:sliceLength] - # test for illegal files names - parts = [] - for part in userName.split("."): - if part.lower() in reservedFileNames: - part = "_" + part - parts.append(part) - userName = ".".join(parts) - # test for clash - fullName = prefix + userName + suffix - if fullName.lower() in existing: - fullName = handleClash1(userName, existing, prefix, suffix) - # finished - return fullName - - -def handleClash1(userName, existing=[], prefix="", suffix=""): - """ - existing should be a case-insensitive list - of all existing file names. - - >>> prefix = ("0" * 5) + "." - >>> suffix = "." 
+ ("0" * 10) - >>> existing = ["a" * 5] - - >>> e = list(existing) - >>> handleClash1(userName="A" * 5, existing=e, - ... prefix=prefix, suffix=suffix) == ( - ... '00000.AAAAA000000000000001.0000000000') - True - - >>> e = list(existing) - >>> e.append(prefix + "aaaaa" + "1".zfill(15) + suffix) - >>> handleClash1(userName="A" * 5, existing=e, - ... prefix=prefix, suffix=suffix) == ( - ... '00000.AAAAA000000000000002.0000000000') - True - - >>> e = list(existing) - >>> e.append(prefix + "AAAAA" + "2".zfill(15) + suffix) - >>> handleClash1(userName="A" * 5, existing=e, - ... prefix=prefix, suffix=suffix) == ( - ... '00000.AAAAA000000000000001.0000000000') - True - """ - # if the prefix length + user name length + suffix length + 15 is at - # or past the maximum length, silce 15 characters off of the user name - prefixLength = len(prefix) - suffixLength = len(suffix) - if prefixLength + len(userName) + suffixLength + 15 > maxFileNameLength: - l = prefixLength + len(userName) + suffixLength + 15 - sliceLength = maxFileNameLength - l - userName = userName[:sliceLength] - finalName = None - # try to add numbers to create a unique name - counter = 1 - while finalName is None: - name = userName + str(counter).zfill(15) - fullName = prefix + name + suffix - if fullName.lower() not in existing: - finalName = fullName - break - else: - counter += 1 - if counter >= 999999999999999: - break - # if there is a clash, go to the next fallback - if finalName is None: - finalName = handleClash2(existing, prefix, suffix) - # finished - return finalName - - -def handleClash2(existing=[], prefix="", suffix=""): - """ - existing should be a case-insensitive list - of all existing file names. - - >>> prefix = ("0" * 5) + "." - >>> suffix = "." + ("0" * 10) - >>> existing = [prefix + str(i) + suffix for i in range(100)] - - >>> e = list(existing) - >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == ( - ... '00000.100.0000000000') - True - - >>> e = list(existing) - >>> e.remove(prefix + "1" + suffix) - >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == ( - ... '00000.1.0000000000') - True - - >>> e = list(existing) - >>> e.remove(prefix + "2" + suffix) - >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == ( - ... '00000.2.0000000000') - True - """ - # calculate the longest possible string - maxLength = maxFileNameLength - len(prefix) - len(suffix) - maxValue = int("9" * maxLength) - # try to find a number - finalName = None - counter = 1 - while finalName is None: - fullName = prefix + str(counter) + suffix - if fullName.lower() not in existing: - finalName = fullName - break - else: - counter += 1 - if counter >= maxValue: - break - # raise an error if nothing has been found - if finalName is None: - raise NameTranslationError("No unique name could be found.") - # finished - return finalName - - -if __name__ == "__main__": - import doctest - import sys - - sys.exit(doctest.testmod().failed) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py deleted file mode 100644 index 673373ffdf4825d4caac4ce5959eb0ee9e11046c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/tfmLib.py +++ /dev/null @@ -1,460 +0,0 @@ -"""Module for reading TFM (TeX Font Metrics) files. 
- -The TFM format is described in the TFtoPL WEB source code, whose typeset form -can be found on `CTAN `_. - - >>> from fontTools.tfmLib import TFM - >>> tfm = TFM("Tests/tfmLib/data/cmr10.tfm") - >>> - >>> # Accessing an attribute gets you metadata. - >>> tfm.checksum - 1274110073 - >>> tfm.designsize - 10.0 - >>> tfm.codingscheme - 'TeX text' - >>> tfm.family - 'CMR' - >>> tfm.seven_bit_safe_flag - False - >>> tfm.face - 234 - >>> tfm.extraheader - {} - >>> tfm.fontdimens - {'SLANT': 0.0, 'SPACE': 0.33333396911621094, 'STRETCH': 0.16666698455810547, 'SHRINK': 0.11111164093017578, 'XHEIGHT': 0.4305553436279297, 'QUAD': 1.0000028610229492, 'EXTRASPACE': 0.11111164093017578} - >>> # Accessing a character gets you its metrics. - >>> # “width” is always available, other metrics are available only when - >>> # applicable. All values are relative to “designsize”. - >>> tfm.chars[ord("g")] - {'width': 0.5000019073486328, 'height': 0.4305553436279297, 'depth': 0.1944446563720703, 'italic': 0.013888359069824219} - >>> # Kerning and ligature can be accessed as well. - >>> tfm.kerning[ord("c")] - {104: -0.02777862548828125, 107: -0.02777862548828125} - >>> tfm.ligatures[ord("f")] - {105: ('LIG', 12), 102: ('LIG', 11), 108: ('LIG', 13)} -""" - -from types import SimpleNamespace - -from fontTools.misc.sstruct import calcsize, unpack, unpack2 - -SIZES_FORMAT = """ - > - lf: h # length of the entire file, in words - lh: h # length of the header data, in words - bc: h # smallest character code in the font - ec: h # largest character code in the font - nw: h # number of words in the width table - nh: h # number of words in the height table - nd: h # number of words in the depth table - ni: h # number of words in the italic correction table - nl: h # number of words in the ligature/kern table - nk: h # number of words in the kern table - ne: h # number of words in the extensible character table - np: h # number of font parameter words -""" - -SIZES_SIZE = calcsize(SIZES_FORMAT) - -FIXED_FORMAT = "12.20F" - -HEADER_FORMAT1 = f""" - > - checksum: L - designsize: {FIXED_FORMAT} -""" - -HEADER_FORMAT2 = f""" - {HEADER_FORMAT1} - codingscheme: 40p -""" - -HEADER_FORMAT3 = f""" - {HEADER_FORMAT2} - family: 20p -""" - -HEADER_FORMAT4 = f""" - {HEADER_FORMAT3} - seven_bit_safe_flag: ? 
- ignored: x - ignored: x - face: B -""" - -HEADER_SIZE1 = calcsize(HEADER_FORMAT1) -HEADER_SIZE2 = calcsize(HEADER_FORMAT2) -HEADER_SIZE3 = calcsize(HEADER_FORMAT3) -HEADER_SIZE4 = calcsize(HEADER_FORMAT4) - -LIG_KERN_COMMAND = """ - > - skip_byte: B - next_char: B - op_byte: B - remainder: B -""" - -BASE_PARAMS = [ - "SLANT", - "SPACE", - "STRETCH", - "SHRINK", - "XHEIGHT", - "QUAD", - "EXTRASPACE", -] - -MATHSY_PARAMS = [ - "NUM1", - "NUM2", - "NUM3", - "DENOM1", - "DENOM2", - "SUP1", - "SUP2", - "SUP3", - "SUB1", - "SUB2", - "SUPDROP", - "SUBDROP", - "DELIM1", - "DELIM2", - "AXISHEIGHT", -] - -MATHEX_PARAMS = [ - "DEFAULTRULETHICKNESS", - "BIGOPSPACING1", - "BIGOPSPACING2", - "BIGOPSPACING3", - "BIGOPSPACING4", - "BIGOPSPACING5", -] - -VANILLA = 0 -MATHSY = 1 -MATHEX = 2 - -UNREACHABLE = 0 -PASSTHROUGH = 1 -ACCESSABLE = 2 - -NO_TAG = 0 -LIG_TAG = 1 -LIST_TAG = 2 -EXT_TAG = 3 - -STOP_FLAG = 128 -KERN_FLAG = 128 - - -class TFMException(Exception): - def __init__(self, message): - super().__init__(message) - - -class TFM: - def __init__(self, file): - self._read(file) - - def __repr__(self): - return ( - f"" - ) - - def _read(self, file): - if hasattr(file, "read"): - data = file.read() - else: - with open(file, "rb") as fp: - data = fp.read() - - self._data = data - - if len(data) < SIZES_SIZE: - raise TFMException("Too short input file") - - sizes = SimpleNamespace() - unpack2(SIZES_FORMAT, data, sizes) - - # Do some file structure sanity checks. - # TeX and TFtoPL do additional functional checks and might even correct - # “errors” in the input file, but we instead try to output the file as - # it is as long as it is parsable, even if the data make no sense. - - if sizes.lf < 0: - raise TFMException("The file claims to have negative or zero length!") - - if len(data) < sizes.lf * 4: - raise TFMException("The file has fewer bytes than it claims!") - - for name, length in vars(sizes).items(): - if length < 0: - raise TFMException("The subfile size: '{name}' is negative!") - - if sizes.lh < 2: - raise TFMException(f"The header length is only {sizes.lh}!") - - if sizes.bc > sizes.ec + 1 or sizes.ec > 255: - raise TFMException( - f"The character code range {sizes.bc}..{sizes.ec} is illegal!" - ) - - if sizes.nw == 0 or sizes.nh == 0 or sizes.nd == 0 or sizes.ni == 0: - raise TFMException("Incomplete subfiles for character dimensions!") - - if sizes.ne > 256: - raise TFMException(f"There are {ne} extensible recipes!") - - if sizes.lf != ( - 6 - + sizes.lh - + (sizes.ec - sizes.bc + 1) - + sizes.nw - + sizes.nh - + sizes.nd - + sizes.ni - + sizes.nl - + sizes.nk - + sizes.ne - + sizes.np - ): - raise TFMException("Subfile sizes don’t add up to the stated total") - - # Subfile offsets, used in the helper function below. These all are - # 32-bit word offsets not 8-bit byte offsets. - char_base = 6 + sizes.lh - sizes.bc - width_base = char_base + sizes.ec + 1 - height_base = width_base + sizes.nw - depth_base = height_base + sizes.nh - italic_base = depth_base + sizes.nd - lig_kern_base = italic_base + sizes.ni - kern_base = lig_kern_base + sizes.nl - exten_base = kern_base + sizes.nk - param_base = exten_base + sizes.ne - - # Helper functions for accessing individual data. If this looks - # nonidiomatic Python, I blame the effect of reading the literate WEB - # documentation of TFtoPL. 
- def char_info(c): - return 4 * (char_base + c) - - def width_index(c): - return data[char_info(c)] - - def noneexistent(c): - return c < sizes.bc or c > sizes.ec or width_index(c) == 0 - - def height_index(c): - return data[char_info(c) + 1] // 16 - - def depth_index(c): - return data[char_info(c) + 1] % 16 - - def italic_index(c): - return data[char_info(c) + 2] // 4 - - def tag(c): - return data[char_info(c) + 2] % 4 - - def remainder(c): - return data[char_info(c) + 3] - - def width(c): - r = 4 * (width_base + width_index(c)) - return read_fixed(r, "v")["v"] - - def height(c): - r = 4 * (height_base + height_index(c)) - return read_fixed(r, "v")["v"] - - def depth(c): - r = 4 * (depth_base + depth_index(c)) - return read_fixed(r, "v")["v"] - - def italic(c): - r = 4 * (italic_base + italic_index(c)) - return read_fixed(r, "v")["v"] - - def exten(c): - return 4 * (exten_base + remainder(c)) - - def lig_step(i): - return 4 * (lig_kern_base + i) - - def lig_kern_command(i): - command = SimpleNamespace() - unpack2(LIG_KERN_COMMAND, data[i:], command) - return command - - def kern(i): - r = 4 * (kern_base + i) - return read_fixed(r, "v")["v"] - - def param(i): - return 4 * (param_base + i) - - def read_fixed(index, key, obj=None): - ret = unpack2(f">;{key}:{FIXED_FORMAT}", data[index:], obj) - return ret[0] - - # Set all attributes to empty values regardless of the header size. - unpack(HEADER_FORMAT4, [0] * HEADER_SIZE4, self) - - offset = 24 - length = sizes.lh * 4 - self.extraheader = {} - if length >= HEADER_SIZE4: - rest = unpack2(HEADER_FORMAT4, data[offset:], self)[1] - if self.face < 18: - s = self.face % 2 - b = self.face // 2 - self.face = "MBL"[b % 3] + "RI"[s] + "RCE"[b // 3] - for i in range(sizes.lh - HEADER_SIZE4 // 4): - rest = unpack2(f">;HEADER{i + 18}:l", rest, self.extraheader)[1] - elif length >= HEADER_SIZE3: - unpack2(HEADER_FORMAT3, data[offset:], self) - elif length >= HEADER_SIZE2: - unpack2(HEADER_FORMAT2, data[offset:], self) - elif length >= HEADER_SIZE1: - unpack2(HEADER_FORMAT1, data[offset:], self) - - self.fonttype = VANILLA - scheme = self.codingscheme.upper() - if scheme.startswith("TEX MATH SY"): - self.fonttype = MATHSY - elif scheme.startswith("TEX MATH EX"): - self.fonttype = MATHEX - - self.fontdimens = {} - for i in range(sizes.np): - name = f"PARAMETER{i+1}" - if i <= 6: - name = BASE_PARAMS[i] - elif self.fonttype == MATHSY and i <= 21: - name = MATHSY_PARAMS[i - 7] - elif self.fonttype == MATHEX and i <= 12: - name = MATHEX_PARAMS[i - 7] - read_fixed(param(i), name, self.fontdimens) - - lig_kern_map = {} - self.right_boundary_char = None - self.left_boundary_char = None - if sizes.nl > 0: - cmd = lig_kern_command(lig_step(0)) - if cmd.skip_byte == 255: - self.right_boundary_char = cmd.next_char - - cmd = lig_kern_command(lig_step((sizes.nl - 1))) - if cmd.skip_byte == 255: - self.left_boundary_char = 256 - r = 256 * cmd.op_byte + cmd.remainder - lig_kern_map[self.left_boundary_char] = r - - self.chars = {} - for c in range(sizes.bc, sizes.ec + 1): - if width_index(c) > 0: - self.chars[c] = info = {} - info["width"] = width(c) - if height_index(c) > 0: - info["height"] = height(c) - if depth_index(c) > 0: - info["depth"] = depth(c) - if italic_index(c) > 0: - info["italic"] = italic(c) - char_tag = tag(c) - if char_tag == NO_TAG: - pass - elif char_tag == LIG_TAG: - lig_kern_map[c] = remainder(c) - elif char_tag == LIST_TAG: - info["nextlarger"] = remainder(c) - elif char_tag == EXT_TAG: - info["varchar"] = varchar = {} - for i in range(4): - part 
= data[exten(c) + i] - if i == 3 or part > 0: - name = "rep" - if i == 0: - name = "top" - elif i == 1: - name = "mid" - elif i == 2: - name = "bot" - if noneexistent(part): - varchar[name] = c - else: - varchar[name] = part - - self.ligatures = {} - self.kerning = {} - for c, i in sorted(lig_kern_map.items()): - cmd = lig_kern_command(lig_step(i)) - if cmd.skip_byte > STOP_FLAG: - i = 256 * cmd.op_byte + cmd.remainder - - while i < sizes.nl: - cmd = lig_kern_command(lig_step(i)) - if cmd.skip_byte > STOP_FLAG: - pass - else: - if cmd.op_byte >= KERN_FLAG: - r = 256 * (cmd.op_byte - KERN_FLAG) + cmd.remainder - self.kerning.setdefault(c, {})[cmd.next_char] = kern(r) - else: - r = cmd.op_byte - if r == 4 or (r > 7 and r != 11): - # Ligature step with nonstandard code, we output - # the code verbatim. - lig = r - else: - lig = "" - if r % 4 > 1: - lig += "/" - lig += "LIG" - if r % 2 != 0: - lig += "/" - while r > 3: - lig += ">" - r -= 4 - self.ligatures.setdefault(c, {})[cmd.next_char] = ( - lig, - cmd.remainder, - ) - - if cmd.skip_byte >= STOP_FLAG: - break - i += cmd.skip_byte + 1 - - -if __name__ == "__main__": - import sys - - tfm = TFM(sys.argv[1]) - print( - "\n".join( - x - for x in [ - f"tfm.checksum={tfm.checksum}", - f"tfm.designsize={tfm.designsize}", - f"tfm.codingscheme={tfm.codingscheme}", - f"tfm.fonttype={tfm.fonttype}", - f"tfm.family={tfm.family}", - f"tfm.seven_bit_safe_flag={tfm.seven_bit_safe_flag}", - f"tfm.face={tfm.face}", - f"tfm.extraheader={tfm.extraheader}", - f"tfm.fontdimens={tfm.fontdimens}", - f"tfm.right_boundary_char={tfm.right_boundary_char}", - f"tfm.left_boundary_char={tfm.left_boundary_char}", - f"tfm.kerning={tfm.kerning}", - f"tfm.ligatures={tfm.ligatures}", - f"tfm.chars={tfm.chars}", - ] - ) - ) - print(tfm) diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py deleted file mode 100644 index b8a00d6305eeda5a94788017afc1cda0d4a4cd2a..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 2e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 20, 25] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py deleted file mode 100644 index 27705a642e013101c1d624cb0cf7e5955d0614ad..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from sad_talker.src.face3d.models.base_model import BaseModel -from 
sad_talker.src.face3d.models import networks -from sad_talker.src.face3d.models.bfm import ParametricFaceModel -from sad_talker.src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from sad_talker.src.face3d.util import util -from sad_talker.src.face3d.util.nvdiffrast import MeshRenderer -# from sad_talker.src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) - - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for loss loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. 
- - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = 
self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. 
* recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py deleted file mode 100644 index c33f67a515caf1edfdda8f866109eaa85ff12585..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/sd_engine.py +++ /dev/null @@ -1,136 +0,0 @@ -# -*- coding: utf-8 -*- -# @Date : 2023/7/19 16:28 -# @Author : stellahong (stellahong@fuzhi.ai) -# @Desc : -import asyncio -import base64 -import io -import json -import os -from os.path import join -from typing import List - -from aiohttp import ClientSession -from PIL import Image, PngImagePlugin - -from metagpt.config import Config -from metagpt.logs import logger - -config = Config() - -payload = { - "prompt": "", - "negative_prompt": "(easynegative:0.8),black, dark,Low resolution", - "override_settings": {"sd_model_checkpoint": "galaxytimemachinesGTM_photoV20"}, - "seed": -1, - "batch_size": 1, - "n_iter": 1, - "steps": 20, - "cfg_scale": 7, - "width": 512, - "height": 768, - "restore_faces": False, - "tiling": False, - "do_not_save_samples": False, - "do_not_save_grid": False, - "enable_hr": False, - "hr_scale": 2, - "hr_upscaler": "Latent", - "hr_second_pass_steps": 0, - "hr_resize_x": 0, - "hr_resize_y": 0, - "hr_upscale_to_x": 0, - "hr_upscale_to_y": 0, - "truncate_x": 0, - "truncate_y": 0, - "applied_old_hires_behavior_to": None, - "eta": None, - "sampler_index": "DPM++ SDE Karras", - "alwayson_scripts": {}, -} - -default_negative_prompt = "(easynegative:0.8),black, dark,Low resolution" - - -class SDEngine: - def __init__(self): - # Initialize the SDEngine with configuration - self.config = Config() - self.sd_url = self.config.get("SD_URL") - self.sd_t2i_url = f"{self.sd_url}{self.config.get('SD_T2I_API')}" - # Define default payload settings for SD API - self.payload = payload - logger.info(self.sd_t2i_url) - - def construct_payload( - self, - prompt, - negtive_prompt=default_negative_prompt, - width=512, - height=512, - sd_model="galaxytimemachinesGTM_photoV20", - ): - # Configure the payload with provided inputs - self.payload["prompt"] = prompt - self.payload["negtive_prompt"] = negtive_prompt - self.payload["width"] = width - self.payload["height"] = height - self.payload["override_settings"]["sd_model_checkpoint"] = sd_model - logger.info(f"call sd payload is {self.payload}") - return self.payload - - def _save(self, imgs, save_name=""): - save_dir = CONFIG.get_workspace() / "resources" / "SD_Output" - if not os.path.exists(save_dir): - os.makedirs(save_dir, exist_ok=True) - batch_decode_base64_to_image(imgs, save_dir, save_name=save_name) - - async def run_t2i(self, prompts: List): - # Asynchronously run the SD API for multiple prompts - session = ClientSession() - for payload_idx, payload in enumerate(prompts): - results = await self.run(url=self.sd_t2i_url, payload=payload, session=session) - self._save(results, save_name=f"output_{payload_idx}") - await session.close() - - async def run(self, url, payload, session): - # Perform the HTTP POST request to the SD API - async with session.post(url, json=payload, timeout=600) as rsp: - data = await 
rsp.read() - - rsp_json = json.loads(data) - imgs = rsp_json["images"] - logger.info(f"callback rsp json is {rsp_json.keys()}") - return imgs - - async def run_i2i(self): - # todo: 添加图生图接口调用 - raise NotImplementedError - - async def run_sam(self): - # todo:添加SAM接口调用 - raise NotImplementedError - - -def decode_base64_to_image(img, save_name): - image = Image.open(io.BytesIO(base64.b64decode(img.split(",", 1)[0]))) - pnginfo = PngImagePlugin.PngInfo() - logger.info(save_name) - image.save(f"{save_name}.png", pnginfo=pnginfo) - return pnginfo, image - - -def batch_decode_base64_to_image(imgs, save_dir="", save_name=""): - for idx, _img in enumerate(imgs): - save_name = join(save_dir, save_name) - decode_base64_to_image(_img, save_name=save_name) - - -if __name__ == "__main__": - engine = SDEngine() - prompt = "pixel style, game design, a game interface should be minimalistic and intuitive with the score and high score displayed at the top. The snake and its food should be easily distinguishable. The game should have a simple color scheme, with a contrasting color for the snake and its food. Complete interface boundary" - - engine.construct_payload(prompt) - - event_loop = asyncio.get_event_loop() - event_loop.run_until_complete(engine.run_t2i(prompt)) diff --git a/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md b/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md deleted file mode 100644 index 00a4bb2425901e13f0d843a093024dd0c90dcdc8..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Abaqus 6.5 Torrent 2021.md +++ /dev/null @@ -1,7 +0,0 @@ - -

            Last summer, I installed Abaqus 6.0-3 on Ubuntu 12.04 and set up a 2D simulation with the parameters given in the reference manual (same build and material as in the manual). When I start the simulation and stop it at a given time, the crack is still propagating: neither the crack nor the simulation stops. How do I stop the simulation?
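            A minimal sketch of one way to do this, assuming the job was submitted through the Abaqus Scripting Interface and is named 'Job-1' (a placeholder); the script only runs inside an Abaqus Python session, e.g. launched as abaqus cae noGUI=stop_job.py:

```python
# Hedged sketch: assumes a job named 'Job-1' exists in the current
# model database of this Abaqus/CAE Python session ('Job-1' is a
# placeholder name, not taken from the question above).
from abaqus import *

job = mdb.jobs['Job-1']   # look up the analysis job by name
job.kill()                # request termination of the running analysis
```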

            -

            abaqus 6.5 torrent


            Download ✏ ✏ ✏ https://gohhs.com/2uFVzY



            -

            Hello, I'm an engineer who is struggling to learn Abaqus/finite elements from the Abaqus 6.5 manual; I really can't understand the concepts in it. I love computer science, so I am used to programming. My question is: where do I start so that I can learn how to set up an Abaqus model, do my research, and analyze my problems? What topics do I need to study before I can learn to program Abaqus to run my models? I would really like your advice. Please know that I am just a beginner, so please be patient with me; I would love to learn from someone who has experience. Thank you in advance!
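            For a sense of what "programming Abaqus" looks like in practice, here is a minimal sketch of an Abaqus Scripting Interface (Python) script; the model and job names are placeholders, and the model is assumed to already exist in the current model database (run with abaqus cae noGUI=run_job.py):

```python
# Minimal Abaqus Scripting Interface sketch. 'Model-1' and 'Job-1' are
# placeholder names; the model is assumed to already be defined in the
# current model database of this Abaqus/CAE session.
from abaqus import *
from abaqusConstants import *

job = mdb.Job(name='Job-1', model='Model-1')  # create an analysis job for the model
job.submit()                                  # hand the job to the solver
job.waitForCompletion()                       # block until the analysis finishes
```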

            -

            The Abaqus software is a product of Dassault Systèmes Simulia Corp. To learn more about Abaqus, please contact Abaqus customer support. 07/03/2014. Our development team is dedicated to making sure. Welcome to Abaqus, a comprehensive FEA package. Abaqus 6.5 documentation > User's Guide. Description of the Abaqus EOS / HT / PEST models. Chapter 6: methods for numerical simulation of composites. Abaqus tutorial for combining commercial and third-party products. Comparison of MultiPAS and Abaqus-AddFEM for the simulation of composites.
            Description of the mission and outcomes of the project.
            Design and implementation of a software framework called MultiPAS (multiscale particulate automata simulation) for …
            Buy Abaqus in Dubai - abaqus 6.5.pdf, abaqus 6.rar, abaqus 6.txt. Download the Abaqus 6.5 torrent. Get free software to download. Collection of the best software; full list of free software downloads. See the list of all free Abaqus 6 downloads. Software price - see the full description. Download the full version free directly from the author's web site or from Rapidshare.

            -
            -
            \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md b/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md deleted file mode 100644 index 92c32377e8e93f8c1608523589ba96d8707e33b2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Assassin-Creed-Brotherhood-Serial-Number-Activation-29.md +++ /dev/null @@ -1,60 +0,0 @@ -Assassin Creed Brotherhood Serial Number Activation 29 - - - -Download File >> [https://maudaracte.blogspot.com/?file=2tvJcQ](https://maudaracte.blogspot.com/?file=2tvJcQ) - - - - - - - - - -Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Assassin Creed Brotherhood Serial Number Activation 29": - -How to Activate Assassin Creed Brotherhood Serial Number 29 - -If you are looking for a way to activate Assassin Creed Brotherhood serial number 29, you are not alone. Many gamers have faced this issue when trying to play this popular action-adventure game on their PC. In this article, we will show you how to solve this problem and enjoy the game without any hassle. - -What is Assassin Creed Brotherhood Serial Number 29? - -Assassin Creed Brotherhood is the third installment in the Assassin Creed series, developed by Ubisoft Montreal and released in 2010. The game follows the adventures of Ezio Auditore da Firenze, a master assassin who fights against the Templars in Renaissance Italy. - -To play the game on PC, you need to have a valid serial key that you can get when you buy the original version of the game. However, some users have reported that they get an error message saying that their serial key has already been accessed and activated when they try to enter it in the Ubisoft launcher. This is what we call Assassin Creed Brotherhood serial number 29. - -Why does Assassin Creed Brotherhood Serial Number 29 happen? - -There are several possible reasons why you may encounter Assassin Creed Brotherhood serial number 29. Some of them are: - - -You have bought a pirated or cracked version of the game that has an invalid or used serial key. -You have entered the serial key incorrectly or mistyped it. -You have installed the game on more than one PC using the same serial key. -You have changed your hardware configuration or updated your operating system after installing the game. -You have a corrupted or outdated Ubisoft launcher that prevents you from activating the game. - - -How to fix Assassin Creed Brotherhood Serial Number 29? - -Depending on the cause of your problem, there are different solutions that you can try to fix Assassin Creed Brotherhood serial number 29. Here are some of them: - - -Make sure you have bought a legitimate copy of the game from a trusted source and that you have a valid serial key. If you have bought a pirated or cracked version of the game, you may need to buy a new one or contact Ubisoft support for assistance. -Check if you have entered the serial key correctly and that there are no spaces or extra characters. You can also copy and paste the serial key from your email confirmation or receipt instead of typing it manually. -If you have installed the game on more than one PC using the same serial key, you may need to uninstall it from one of them or buy another serial key. Each serial key can only be used on one PC at a time. 
-If you have changed your hardware configuration or updated your operating system after installing the game, you may need to reactivate it using your serial key. You can do this by launching the Ubisoft launcher and clicking on "Activate a product" under "Games". -If you have a corrupted or outdated Ubisoft launcher, you may need to reinstall it or update it to the latest version. You can download the Ubisoft launcher from here. After installing or updating it, restart your PC and try to activate the game again. - - -Conclusion - -Assassin Creed Brotherhood serial number 29 is a common error that many PC gamers face when trying to play this awesome game. However, it is not impossible to fix it. By following the steps above, you should be able to activate your game and enjoy it without any trouble. - -If none of these solutions work for you, you may need to contact Ubisoft support for further help. You can reach them through this link. They will be happy to assist you and resolve your issue as soon as possible. - -We hope this article has been helpful for you and that you have learned how to fix Assassin Creed Brotherhood serial number 29. If you have any questions or comments, feel free to leave them below. We would love to hear from you. dfd1c89656 - - - diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py deleted file mode 100644 index 60d01a67c16e46d8e36718ebe90a723b7d541e1a..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/index.py +++ /dev/null @@ -1,17 +0,0 @@ - - -# TODO: This is the loaded index, underneath a searcher. - - -""" -## Operations: - -index = Index(index='/path/to/index') -index.load_to_memory() - -batch_of_pids = [2324,32432,98743,23432] -index.lookup(batch_of_pids, device='cuda:0') -> (N, doc_maxlen, dim) - -index.iterate_over_parts() - -""" diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Luzao-Bert-Vits2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/custom.py b/spaces/dineshreddy/WALT/mmdet/datasets/custom.py deleted file mode 100644 index 356f01ede6456312920b6fe8fa618258d8898075..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/custom.py +++ /dev/null @@ -1,334 +0,0 @@ -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from torch.utils.data import Dataset - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for detection. - - The annotation format is shown as follows. The `ann` field is optional for - testing. - - .. code-block:: none - - [ - { - 'filename': 'a.jpg', - 'width': 1280, - 'height': 720, - 'ann': { - 'bboxes': (n, 4) in (x1, y1, x2, y2) order. - 'labels': (n, ), - 'bboxes_ignore': (k, 4), (optional field) - 'labels_ignore': (k, 4) (optional field) - } - }, - ... 
- ] - - Args: - ann_file (str): Annotation file path. - pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified. - test_mode (bool, optional): If set True, annotation will not be loaded. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. - """ - - CLASSES = None - - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.get_classes(classes) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.ann_file): - self.ann_file = osp.join(self.data_root, self.ann_file) - if not (self.img_prefix is None or osp.isabs(self.img_prefix)): - self.img_prefix = osp.join(self.data_root, self.img_prefix) - if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): - self.seg_prefix = osp.join(self.data_root, self.seg_prefix) - if not (self.proposal_file is None - or osp.isabs(self.proposal_file)): - self.proposal_file = osp.join(self.data_root, - self.proposal_file) - # load annotations (and proposals) - self.data_infos = self.load_annotations(self.ann_file) - - if self.proposal_file is not None: - self.proposals = self.load_proposals(self.proposal_file) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - def __len__(self): - """Total number of samples of data.""" - return len(self.data_infos) - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - return mmcv.load(ann_file) - - def load_proposals(self, proposal_file): - """Load proposal from proposal file.""" - return mmcv.load(proposal_file) - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.data_infos[idx]['ann'] - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn( - 'CustomDataset does not support filtering empty gt images.') - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set \ - True). - """ - - if self.test_mode: - while 1: - try: - return self.prepare_test_img(idx) - except: - idx = idx+1 - #return self.prepare_test_img(idx+1) - - #return self.prepare_test_img(idx) - while True: - try: - data = self.prepare_train_img(idx) - except: - data = self.prepare_train_img(idx-1) - - if data is None: - idx = self._rand_another(idx) - continue - return data - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys \ - introduced by pipeline. - """ - - img_info = self.data_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by \ - pipeline. - """ - - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. 
- """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate the dataset. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP. - Default: None. - """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - dataset=self.CLASSES, - logger=logger) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thr, logger=logger) - for i, num in enumerate(proposal_nums): - for j, iou in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py deleted file mode 100644 index fb9a44b3422dae5a9788d39b0901335dfc6076a9..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_datasets/synthtext.py +++ /dev/null @@ -1,18 +0,0 @@ -dataset_type = 'TextDetDataset' -data_root = 'data/synthtext' - -train = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_training.lmdb', - loader=dict( - type='AnnFileLoader', - repeat=1, - file_format='lmdb', - parser=dict( - type='LineJsonParser', - keys=['file_name', 'height', 'width', 'annotations'])), - img_prefix=f'{data_root}/imgs', - pipeline=None) - -train_list = [train] -test_list = [train] diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py deleted file mode 100644 index fbaacc19b19f6f8284eb65c7d2d2aa95e8051427..0000000000000000000000000000000000000000 --- 
a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_600e.py', - '../../_base_/det_models/psenet_r50_fpnf.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/psenet_pipeline.py' -] - -model = {{_base_.model_quad}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/diy2023/databricks-dolly-v1-6b/app.py b/spaces/diy2023/databricks-dolly-v1-6b/app.py deleted file mode 100644 index 671cdd19c85ad2351038f5fffc40c71a5657b4c8..0000000000000000000000000000000000000000 --- a/spaces/diy2023/databricks-dolly-v1-6b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/databricks/dolly-v1-6b").launch() \ No newline at end of file diff --git a/spaces/docs-demos/bart-large-mnli/README.md b/spaces/docs-demos/bart-large-mnli/README.md deleted file mode 100644 index 98a3cf74d10c26dc43346681590dea5655c4e12a..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/bart-large-mnli/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: BART -emoji: 🐠 -colorFrom: indigo -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
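The space's own `app.py` is not included in this hunk; as a rough sketch of what a model-demo app of this kind typically looks like, it can mirror the one-line `gr.Interface.load` pattern used by the dolly space deleted above — the model path below is an assumption, not taken from this space's actual code:

```python
import gradio as gr

# Assumed model path: the space is named after facebook/bart-large-mnli,
# a zero-shot classification (NLI) checkpoint on the Hugging Face Hub.
gr.Interface.load("models/facebook/bart-large-mnli").launch()
```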
diff --git a/spaces/doevent/animegan-v2-for-videos/README.md b/spaces/doevent/animegan-v2-for-videos/README.md deleted file mode 100644 index 9970b8f305fbf1c35599ab14b46236dc543ba015..0000000000000000000000000000000000000000 --- a/spaces/doevent/animegan-v2-for-videos/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: AnimeGANv2 On Videos -emoji: 🔥 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.0.9 -app_file: app.py -pinned: false ---- - -# AnimeGAN-v2 For Videos - -[![Generic badge](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/nateraw/animegan-v2-for-videos) - -Apply AnimeGAN-v2 across frames of a video - ---- - -Autogenerated using [this template](https://github.com/nateraw/spaces-template) \ No newline at end of file diff --git a/spaces/doluvor/faster-whisper-webui/app-local.py b/spaces/doluvor/faster-whisper-webui/app-local.py deleted file mode 100644 index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/app-local.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1)) \ No newline at end of file diff --git a/spaces/dovanquyet/PsyPlus/README.md b/spaces/dovanquyet/PsyPlus/README.md deleted file mode 100644 index 97b097702abb6509467c39fd3cc7544965e414e2..0000000000000000000000000000000000000000 --- a/spaces/dovanquyet/PsyPlus/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: PsyPlus -emoji: 🤖 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.10.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -For more information about this product, please visit this notion [page](https://www.notion.so/AI-Consulting-Design-Scheme-0a9c5288820d4fec98ecc7cc1e84be51)) (you need to have permission to access this page) - -# Notes - -### 2022/12/20 - -- DONE turning the chatbot to session varible so that different sessions will show different conversation -- Chat flow will trigger euc 200 when detect a negative emotion with prob > threshold. Thus, only euc 100 and free chat consist of chat loop, while euc 200 will pop up sometimes. I set the trigger to NOT be regularly (currently one trigger once during the conversation), because trigger to much will bother users -- Already fix the problem with dialog model. Now it's configured as the same as what it should be. Of course, that does not guarantee of good response -- TODO is written in the main file already -- Successfully convert plain euc 100 and 200 to chat flow \ No newline at end of file diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py deleted file mode 100644 index e3413961d1d184b99835eb1e919b052d70298bc6..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .GroundingDINO import build_groundingdino - - -def build_model(args): - # we use register to maintain models from catdet6 on. - from .registry import MODULE_BUILD_FUNCS - - assert args.modelname in MODULE_BUILD_FUNCS._module_dict - build_func = MODULE_BUILD_FUNCS.get(args.modelname) - model = build_func(args) - return model diff --git a/spaces/ds520/bingo/src/components/settings.tsx b/spaces/ds520/bingo/src/components/settings.tsx deleted file mode 100644 index 45ba6044ff9cbe584f62292a49ea2ace9acc1f48..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
            - 图文示例: - 如何获取 BING_HEADER - - -
            - -
            - setCurlValue(e.target.value)} - /> -
            - 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
            - - - - - - - -
            - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
            - 启用语音回答 - setEnableTTS(checked)} - > - - -
            - - - - -
            -
            - ) - } - return null -} diff --git a/spaces/dwolfe66/text-generation-webui-space/README.md b/spaces/dwolfe66/text-generation-webui-space/README.md deleted file mode 100644 index 527e068e61d6f694a0c92c6cce7724137bbed79d..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Text Generation Webui Space -emoji: 🏃 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.20.1 -app_file: run.py -pinned: false -license: mit -duplicated_from: antonovmaxim/text-generation-webui-space ---- - -Check out this repo https://github.com/oobabooga/text-generation-webui diff --git a/spaces/dylanebert/FarmingGame/Build/Build.loader.js b/spaces/dylanebert/FarmingGame/Build/Build.loader.js deleted file mode 100644 index 624c5f8442f0ea1722cf63eb8ea80823e2d8e2ef..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/FarmingGame/Build/Build.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(e,t,r){function n(e,r){if(!n.aborted&&t.showBanner)return"error"==r&&(n.aborted=!0),t.showBanner(e,r);switch(r){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function o(e){var t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";if(n.startsWith(r)&&(n=n.substring(r.length)),r+="\n"+n.trim(),r&&c.stackTraceRegExp&&c.stackTraceRegExp.test(r)){var o=e.filename||t&&(t.fileName||t.sourceURL)||"",a=e.lineno||t&&(t.lineNumber||t.line)||0;s(r,o,a)}}function a(e){e.preventDefault()}function s(e,t,r){if(e.indexOf("fullscreen error")==-1){if(c.startupErrorHandler)return void c.startupErrorHandler(e,t,r);if(!(c.errorHandler&&c.errorHandler(e,t,r)||(console.log("Invoking error handler due to\n"+e),"function"==typeof dump&&dump("Invoking error handler due to\n"+e),s.didShowErrorMessage))){var e="An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:\n"+e;e.indexOf("DISABLE_EXCEPTION_CATCHING")!=-1?e="An exception has occurred, but exception handling has been disabled in this build. If you are the developer of this content, enable exceptions in your project WebGL player settings to be able to catch the exception or see the stack trace.":e.indexOf("Cannot enlarge memory arrays")!=-1?e="Out of memory. If you are the developer of this content, try allocating more memory to your WebGL build in the WebGL player settings.":e.indexOf("Invalid array buffer length")==-1&&e.indexOf("Invalid typed array length")==-1&&e.indexOf("out of memory")==-1&&e.indexOf("could not allocate memory")==-1||(e="The browser could not allocate enough memory for the WebGL content. 
If you are the developer of this content, try allocating less memory to your WebGL build in the WebGL player settings."),alert(e),s.didShowErrorMessage=!0}}}function i(e,t){if("symbolsUrl"!=e){var n=c.downloadProgress[e];n||(n=c.downloadProgress[e]={started:!1,finished:!1,lengthComputable:!1,total:0,loaded:0}),"object"!=typeof t||"progress"!=t.type&&"load"!=t.type||(n.started||(n.started=!0,n.lengthComputable=t.lengthComputable),n.total=t.total,n.loaded=t.loaded,"load"==t.type&&(n.finished=!0));var o=0,a=0,s=0,i=0,d=0;for(var e in c.downloadProgress){var n=c.downloadProgress[e];if(!n.started)return 0;s++,n.lengthComputable?(o+=n.loaded,a+=n.total,i++):n.finished||d++}var u=s?(s-d-(a?i*(a-o)/a:0))/s:0;r(.9*u)}}function d(e){i(e);var t=c.cacheControl(c[e]),r=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,o=c[e],a=/file:\/\//.exec(o)?"same-origin":void 0,s=r(c[e],{method:"GET",companyName:c.companyName,productName:c.productName,control:t,mode:a,onProgress:function(t){i(e,t)}});return s.then(function(e){return e.parsedBody}).catch(function(t){var r="Failed to download file "+c[e];"file:"==location.protocol?n(r+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(r)})}function u(){return new Promise(function(e,t){var r=document.createElement("script");r.src=c.frameworkUrl,r.onload=function(){if("undefined"==typeof unityFramework||!unityFramework){var t=[["br","br"],["gz","gzip"]];for(var o in t){var a=t[o];if(c.frameworkUrl.endsWith("."+a[0])){var s="Unable to parse "+c.frameworkUrl+"!";if("file:"==location.protocol)return void n(s+" Loading pre-compressed (brotli or gzip) content via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host compressed Unity content, or use the Unity Build and Run option.","error");if(s+=' This can happen if build compression was enabled but web server hosting the content was misconfigured to not serve the file with HTTP Response Header "Content-Encoding: '+a[1]+'" present. Check browser Console and Devtools Network tab to debug.',"br"==a[0]&&"http:"==location.protocol){var i=["localhost","127.0.0.1"].indexOf(location.hostname)!=-1?"":"Migrate your server to use HTTPS.";s=/Firefox/.test(navigator.userAgent)?"Unable to parse "+c.frameworkUrl+'!
            If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+i+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
            If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'}return void n(s,"error")}}n("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var d=unityFramework;unityFramework=null,r.onload=null,e(d)},r.onerror=function(e){n("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(r),c.deinitializers.push(function(){document.body.removeChild(r)})})}function l(){u().then(function(e){e(c)});var e=d("dataUrl");c.preRun.push(function(){c.addRunDependency("dataUrl"),e.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";r+=n.length;var o=t.getUint32(r,!0);for(r+=4;r0;u=l,l=d.indexOf("/",u)+1)c.FS_createPath(d.substring(0,u),d.substring(u,l-1),!0,!0);c.FS_createDataFile(d,null,e.subarray(a,a+s),!0,!0,!0)}c.removeRunDependency("dataUrl")})})}r=r||function(){};var c={canvas:e,webglContextAttributes:{preserveDrawingBuffer:!1},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){var r=window.setInterval(e,t);return this.intervals[r]=!0,r},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&e.indexOf("wasm streaming compile failed")!=-1&&(e.toLowerCase().indexOf("mime")!=-1?n('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):n('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). 
Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return"build.wasm"==e?this.codeUrl:e},disabledCanvasEvents:["contextmenu","dragstart"]};for(var h in t)c[h]=t[h];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var f=c.disabledCanvasEvents.slice();f.forEach(function(t){e.addEventListener(t,a)}),window.addEventListener("error",o),window.addEventListener("unhandledrejection",o),c.deinitializers.push(function(){c.disableAccessToMediaDevices(),f.forEach(function(t){e.removeEventListener(t,a)}),window.removeEventListener("error",o),window.removeEventListener("unhandledrejection",o);for(var t in c.intervals)window.clearInterval(t);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;e=200&&this.status<=299}.bind(this)})}function o(e,t,r,n,o){var a={url:e,version:d.version,company:t,product:r,updated:n,revalidated:n,accessed:n,response:{headers:{}}};return o&&(o.headers.forEach(function(e,t){a.response.headers[t]=e}),["redirected","status","statusText","type","url"].forEach(function(e){a.response[e]=o[e]}),a.response.parsedBody=o.parsedBody),a}function a(e,t){return(!t||!t.method||"GET"===t.method)&&((!t||["must-revalidate","immutable"].indexOf(t.control)!=-1)&&!!e.match("^https?://"))}function s(s,l){function c(t,r){return u(t,r).then(function(t){return!g.enabled||g.revalidated?t:304===t.status?(g.result.revalidated=g.result.accessed,g.revalidated=!0,f.storeRequest(g.result).then(function(){e("'"+g.result.url+"' successfully revalidated and served from the indexedDB cache")}).catch(function(t){e("'"+g.result.url+"' successfully revalidated but not stored in the indexedDB cache due to the error: "+t)}),new n(g.result.response)):(200==t.status?(g.result=o(t.url,g.company,g.product,g.accessed,t),g.revalidated=!0,f.storeRequest(g.result).then(function(){e("'"+g.result.url+"' successfully downloaded and stored in the indexedDB cache")}).catch(function(t){e("'"+g.result.url+"' successfully downloaded but not stored in the indexedDB cache due to the error: "+t)})):e("'"+g.result.url+"' request failed with status: "+t.status+" "+t.statusText),t)})}function h(e){l&&l.onProgress&&(l.onProgress({type:"progress",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}),l.onProgress({type:"load",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}))}var f=i.getInstance(),p=t("string"==typeof s?s:s.url),g={enabled:a(p,l)};return l&&(g.control=l.control,g.company=l.company,g.product=l.product),g.result=o(p,g.company,g.product,Date.now()),g.revalidated=!1,g.enabled?f.loadRequest(g.result.url).then(function(t){if(!t||t.version!==d.version)return c(s,l);g.result=t,g.result.accessed=Date.now();var o=new n(g.result.response);if("immutable"==g.control)return g.revalidated=!0,f.storeRequest(g.result),e("'"+g.result.url+"' served from the indexedDB cache without revalidation"),h(o),o;if(r(g.result.url)&&(o.headers.get("Last-Modified")||o.headers.get("ETag")))return fetch(g.result.url,{method:"HEAD"}).then(function(t){return g.revalidated=["Last-Modified","ETag"].every(function(e){return!o.headers.get(e)||o.headers.get(e)==t.headers.get(e)}),g.revalidated?(g.result.revalidated=g.result.accessed,f.storeRequest(g.result),e("'"+g.result.url+"' successfully revalidated and served from the indexedDB cache"),h(o),o):c(s,l)});l=l||{};var a=l.headers||{};return 
l.headers=a,o.headers.get("Last-Modified")?(a["If-Modified-Since"]=o.headers.get("Last-Modified"),a["Cache-Control"]="no-cache"):o.headers.get("ETag")&&(a["If-None-Match"]=o.headers.get("ETag"),a["Cache-Control"]="no-cache"),c(s,l)}).catch(function(t){return e("Failed to load '"+g.result.url+"' from indexedDB cache due to the error: "+t),u(s,l)}):u(s,l)}var i=c.UnityCache,d=i.RequestStore,u=c.fetchWithProgress;return n.prototype.arrayBuffer=function(){return Promise.resolve(this.parsedBody.buffer)},n.prototype.blob=function(){return this.arrayBuffer().then(function(e){return new Blob([e])})},n.prototype.json=function(){return this.text().then(function(e){return JSON.parse(e)})},n.prototype.text=function(){var e=new TextDecoder;return Promise.resolve(e.decode(this.parsedBody))},s}(),new Promise(function(e,t){c.SystemInfo.hasWebGL?1==c.SystemInfo.hasWebGL?t('Your browser does not support graphics API "WebGL 2" which is required for this content.'):c.SystemInfo.hasWasm?(1==c.SystemInfo.hasWebGL&&c.print('Warning: Your browser does not support "WebGL 2" Graphics API, switching to "WebGL 1"'),c.startupErrorHandler=t,r(0),c.postRun.push(function(){r(1),delete c.startupErrorHandler,e(m)}),l()):t("Your browser does not support WebAssembly."):t("Your browser does not support WebGL.")})} \ No newline at end of file diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py deleted file mode 100644 index e9246c6c8f2ff3c37a7f8529ea1593c7f80f887e..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/data_utils.py +++ /dev/null @@ -1,393 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = 
torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - audiopath = "E:/uma_voice/" + audiopath - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = 
cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md b/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md deleted file mode 100644 index 88cfb45fdacf2c74ea4c9afc85e8a0c38911de3b..0000000000000000000000000000000000000000 --- a/spaces/eaedk/Tuto_Sentiment_Analysis_App/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Tuto Sentiment Analysis App -emoji: 🔥 -pinned: False -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py ---- -# Tuto Sentiment Analysis App -This Sentiment Analysis App is a tweets classifer system based on a [Hugginface pretrained model (DistillBERT)](https://huggingface.co/docs/transformers/model_doc/distilbert), [finetuned](https://huggingface.co/GhylB/Sentiment_Analysis_DistilBERT) by one of my brilliant trainees [Mr. 
Gilbert Botchway](https://www.linkedin.com/in/gilbert-botchway/) on the [Zindi Covid-19 tweets classification dataset](https://zindi.africa/competitions/covid-19-tweet-classification) - -## Setup -### Direct Execution -Please follow the instructions below to run the app. -`commands will be added soon` -### Docker -Please follow the instructions below to run the app. -`commands will be added soon` \ No newline at end of file diff --git a/spaces/ebgoldstein/FRF_Heavies/README.md b/spaces/ebgoldstein/FRF_Heavies/README.md deleted file mode 100644 index 209b943ee41776af918de134bcbffe48a804d32f..0000000000000000000000000000000000000000 --- a/spaces/ebgoldstein/FRF_Heavies/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: FRF Heavy Minerals -emoji: 📉 -colorFrom: red -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: ebgoldstein/FRFArgus ---- diff --git a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md b/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md deleted file mode 100644 index 83cefef74be556c587f01c9c050ae1931080026f..0000000000000000000000000000000000000000 --- a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stable Diffusion Webui -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eson/tokenizer-arena/vocab/icetk/README.md b/spaces/eson/tokenizer-arena/vocab/icetk/README.md deleted file mode 100644 index f26f84a0d10b9a853b8a162b5be6d02394432d96..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/icetk/README.md +++ /dev/null @@ -1,72 +0,0 @@ - - -## 简介 - -``` -num_image_tokens = 20000 image_tokenizer -num_text_tokens = 130000 text_tokenizer -``` - -一共 150000 - -## text_tokenizer - -``` -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[0] 对应 20000+0 -piece: "" -score: 0.0 -type: UNKNOWN - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[1] 对应 20000+1 -piece: "" -score: 0.0 -type: CONTROL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[2] 对应 20000+2 -piece: "" -score: 0.0 -type: CONTROL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[3] 对应 20000+3 -piece: "" -score: 0.0 -type: CONTROL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[4] 对应 20000+4 -piece: "" -score: 0.0 -type: USER_DEFINED - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[5] 对应 20000+5 -piece: "\342\226\201" -score: -2.6171817779541016 -type: NORMAL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[6] -piece: "," -score: -3.151700019836426 -type: NORMAL - - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[50] -piece: "{" -score: -7.532660961151123 -type: NORMAL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[100] -piece: "\342\226\201the" # "\342\226\201" 这是啥?? 
-score: -3.922896385192871 -type: NORMAL - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[200] -piece: "\342\226\201This" -score: -7.821105480194092 -type: NORMAL - - -tokenizer.sp_tokenizer.text_tokenizer.proto.pieces[128293] -piece: "\342\226\201pa\303\255ses" -score: -14.182646751403809 -type: NORMAL -``` - diff --git a/spaces/ewgewgewg/IndexingAlpha/app.py b/spaces/ewgewgewg/IndexingAlpha/app.py deleted file mode 100644 index 21807dd968c7e8ac0cbaa32e31437fcdf15066ff..0000000000000000000000000000000000000000 --- a/spaces/ewgewgewg/IndexingAlpha/app.py +++ /dev/null @@ -1,80 +0,0 @@ -# GNU -import gradio as gr -from generate import generate - -demo = gr.Blocks() - -def attempted_items_changer(attempted_items_input): - if (not attempted_items_input.isdigit()): - return { - attempted_items: 50 - } - return { - attempted_items: max(int(attempted_items_input), 0) - } - -def offset_changer(offset_input): - if(not offset_input.isdigit() and not (offset_input[0] == '-' and offset_input[1:].isdigit())): - return { - offset: 0 - } - return { - offset: int(offset_input) - } - -def custom_changer (custom_input): - return { - custom: custom_input - } - -with demo: - - attempted_items = gr.State(50) - offset = gr.State(0) - custom = gr.State("") - - gr.Markdown("# PDF to Index") - - with gr.Column(): - - gr.Markdown("### Load Inputs") - - uploaded_file = gr.File( - label="Upload a PDF file", - file_count="single", - type="file" - ) - - with gr.Row(): - attempted_items_input = gr.Textbox(value="50", show_label=True, label="Attempted Generated Items") - offset_input = gr.Textbox(value="0", show_label=True, label="Page Offset") - attempted_items_input.change(attempted_items_changer, [attempted_items_input], [attempted_items]) - offset_input.change(offset_changer, [offset_input], [offset]) - - gr.HTML("

            Attempted Generated Items is the number of terms intended to be automatically generated for the index (the output may be slightly lower), while Page Offset is a value added to each page number found in the file. In the case of invalid values, Attempted Items will default to 50 and Page Offset will default to 0. If the fields do not produce the expected values, you may be clicking too quickly -- please adjust the field, wait, and try again.

            ") - - with gr.Row(): - custom_input = gr.Textbox(value="", show_label=True, label="Custom") - custom_input.change(custom_changer, [custom_input], [custom]) - - gr.HTML("

            You can add semicolon-separated values in Custom to add custom fields to the index. Optionally, you can comma-separate terms between semicolons if you want multiple terms to contribute to a single index entry -- the first term will be the label for the index entry. If Custom does not produce the expected values, you may be clicking too quickly -- please adjust the field, wait, and try again.

            ") - - - gr.Markdown("---") - - with gr.Column(): - gr.Markdown("### Index From PDF") - convert_button = gr.Button("Generate Index From PDF", variant="primary") - out_placeholder = gr.HTML('

            Output will appear below, with PyPDF2 for preprocessing and yake for processing:

            ') - gr.Markdown("### Index") - index = gr.Textbox( - label="Index", placeholder="The index will appear here" - ) - - convert_button.click( - fn=generate, - inputs=[uploaded_file, attempted_items, offset, custom], - outputs=[index], - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py b/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py deleted file mode 100644 index 100783d248c4cd6dcbdb091181ac21f0f66af670..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/request_llm/bridge_chatglm.py +++ /dev/null @@ -1,161 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.chatglm_model = None - self.chatglm_tokenizer = None - self.info = "" - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import sentencepiece - self.info = "依赖检测通过" - self.success = True - except: - self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。" - self.success = False - - def ready(self): - return self.chatglm_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - retry = 0 - while True: - try: - if self.chatglm_model is None: - self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) - device, = get_conf('LOCAL_MODEL_DEVICE') - if device=='cpu': - self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float() - else: - self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda() - self.chatglm_model = self.chatglm_model.eval() - break - else: - break - except: - retry += 1 - if retry > 3: - self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。') - raise RuntimeError("不能正常加载ChatGLM的参数!") - - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - # 收到消息,开始请求 - try: - for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs): - self.child.send(response) - # # 中途接收可能的终止指令(如果有的话) - # if self.child.poll(): - # command = self.child.recv() - # if command == '[Terminate]': break - except: - from toolbox import trimmed_format_exc - self.child.send('[Local Message] Call ChatGLM fail.' 
+ '\n```\n' + trimmed_format_exc() + '\n```\n') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global glm_handle -glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info - if not glm_handle.success: - error = glm_handle.info - glm_handle = None - raise RuntimeError(error) - - # chatglm 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - history_feedin.append(["What can I do?", sys_prompt]) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not glm_handle.success: - glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - history_feedin.append(["What can I do?", system_prompt] ) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收chatglm的回复 - response = "[Local Message]: 等待ChatGLM响应中 ..." - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待ChatGLM响应中 ...": - response = "[Local Message]: ChatGLM响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py b/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py deleted file mode 100644 index dda852b159bd8f2864fe6f6b87de9677e3e41625..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/viz/trunc_noise_widget.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import imgui -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class TruncationNoiseWidget: - def __init__(self, viz): - self.viz = viz - self.prev_num_ws = 0 - self.trunc_psi = 1 - self.trunc_cutoff = 0 - self.noise_enable = True - self.noise_seed = 0 - self.noise_anim = False - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - num_ws = viz.result.get('num_ws', 0) - has_noise = viz.result.get('has_noise', False) - if num_ws > 0 and num_ws != self.prev_num_ws: - if self.trunc_cutoff > num_ws or self.trunc_cutoff == self.prev_num_ws: - self.trunc_cutoff = num_ws - self.prev_num_ws = num_ws - - if show: - imgui.text('Truncate') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 10), imgui_utils.grayed_out(num_ws == 0): - _changed, self.trunc_psi = imgui.slider_float('##psi', self.trunc_psi, -1, 2, format='Psi %.2f') - imgui.same_line() - if num_ws == 0: - imgui_utils.button('Cutoff 0', width=(viz.font_size * 8 + viz.spacing), enabled=False) - else: - with imgui_utils.item_width(viz.font_size * 8 + viz.spacing): - changed, new_cutoff = imgui.slider_int('##cutoff', self.trunc_cutoff, 0, num_ws, format='Cutoff %d') - if changed: - self.trunc_cutoff = min(max(new_cutoff, 0), num_ws) - - with imgui_utils.grayed_out(not has_noise): - imgui.same_line() - _clicked, self.noise_enable = imgui.checkbox('Noise##enable', self.noise_enable) - imgui.same_line(round(viz.font_size * 27.7)) - with imgui_utils.grayed_out(not self.noise_enable): - with imgui_utils.item_width(-1 - viz.button_w - viz.spacing - viz.font_size * 4): - _changed, self.noise_seed = imgui.input_int('##seed', self.noise_seed) - imgui.same_line(spacing=0) - _clicked, self.noise_anim = imgui.checkbox('Anim##noise', self.noise_anim) - - is_def_trunc = (self.trunc_psi == 1 and self.trunc_cutoff == num_ws) - is_def_noise = (self.noise_enable and self.noise_seed == 0 and not self.noise_anim) - with imgui_utils.grayed_out(is_def_trunc and not has_noise): - imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w) - if imgui_utils.button('Reset', width=-1, enabled=(not is_def_trunc or not is_def_noise)): - self.prev_num_ws = num_ws - self.trunc_psi = 1 - self.trunc_cutoff = num_ws - self.noise_enable = True - self.noise_seed = 0 - self.noise_anim = False - - if self.noise_anim: - self.noise_seed += 1 - viz.args.update(trunc_psi=self.trunc_psi, trunc_cutoff=self.trunc_cutoff, random_seed=self.noise_seed) - viz.args.noise_mode = ('none' if not self.noise_enable else 'const' if self.noise_seed == 0 else 'random') - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md b/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md deleted file mode 100644 index 27f3fc77df9eff6969053db0b6b18ec03438cfb8..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Al Azkar Imam Nawawi Pdf Download WORK.md +++ /dev/null @@ -1,45 +0,0 @@ -
            -

            Al Azkar Imam Nawawi PDF Download: A Free and Authentic Source of Islamic Supplications and Remembrances

            - -

            If you are looking for a reliable and authentic source of Islamic supplications and remembrances, you might want to check out al azkar imam nawawi pdf download. This is a PDF file that contains the book Kitab al-Adhkar by Imam Yahya ibn Sharaf an-Nawawi, a famous scholar and jurist of the Shafi'i school of Islamic law.

            -

            al azkar imam nawawi pdf download


            Download File ---> https://urlca.com/2uDc77



            - -

            Al azkar imam nawawi pdf download is a comprehensive collection of supplications and remembrances that are related from the Prophet Muhammad (peace be upon him) and his companions. The book covers various topics, such as the daily prayers, the morning and evening remembrances, the virtues of reciting the Quran, the supplications for different occasions and situations, the remembrances for the night and day, the etiquettes of sleeping and waking up, the supplications for protection and healing, and the remembrances for death and the hereafter.

            - -

            The book is written in a clear and concise manner, with references to the sources of each supplication and remembrance. The book also includes explanatory notes and comments by Imam an-Nawawi on some of the supplications and remembrances. The book is divided into 17 chapters, each of which covers a different category of supplications and remembrances.

            - -

            Why You Should Download Al Azkar Imam Nawawi PDF

            - -

            There are many reasons why you should download al azkar imam nawawi pdf if you are interested in Islamic supplications and remembrances. Here are some of them:

            - -
              -
            • Al azkar imam nawawi pdf is a free download that you can access anytime and anywhere. You don't need to pay anything or sign up for anything to get this valuable resource.
            • -
            • Al azkar imam nawawi pdf is a PDF file that you can easily read on your computer, tablet, or smartphone. You can also print it out if you prefer a hard copy.
            • -
            • Al azkar imam nawawi pdf is a comprehensive and authentic collection of supplications and remembrances that are based on the Quran and Sunnah. It is very useful for all Muslims who want to increase their connection with Allah and seek His blessings and mercy.
            • -
            • Al azkar imam nawawi pdf is written by an eminent scholar who has a deep knowledge and understanding of Islamic sciences. He has authored several books on various topics of Islamic law, theology, hadith, ethics, and spirituality.
            • -
            • Al azkar imam nawawi pdf is a user-friendly book that has a clear and concise style, with references to the sources of each supplication and remembrance. It also has explanatory notes and comments by Imam an-Nawawi on some of the supplications and remembrances.
            • -
            - -

            How to Download Al Azkar Imam Nawawi PDF

            - -

            If you want to download al azkar imam nawawi pdf, you can follow these simple steps:

            - -
              -
            1. Click on one of the links below to go to the download page.
            2. -
            3. Click on the download button to start downloading the PDF file.
            4. -
            5. Save the file on your device or open it with your preferred PDF reader.
            6. -
            7. Enjoy reading Kitab al-Adhkar by Imam Yahya ibn Sharaf an-Nawawi.
            8. -
            - -

            Download Al Azkar Imam Nawawi PDF from Archive.org

            -

            Download Al Azkar Imam Nawawi PDF from Google Drive

            -

            - -

            Conclusion

            - -

            Al azkar imam nawawi pdf is a great resource for anyone who wants to learn Islamic supplications and remembrances. It is a comprehensive and authentic collection of supplications and remembrances that are based on the Quran and Sunnah. It is also a free download that you can access anytime and anywhere. So what are you waiting for? Download al azkar imam nawawi pdf today and enhance your spiritual life.

            -

            Conclusion

            - -

            Al azkar imam nawawi pdf is a great resource for anyone who wants to learn Islamic supplications and remembrances. It is a comprehensive and authentic collection of supplications and remembrances that are based on the Quran and Sunnah. It is also a free download that you can access anytime and anywhere. So what are you waiting for? Download al azkar imam nawawi pdf today and enhance your spiritual life.

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md b/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md deleted file mode 100644 index effa5c22f608c9e387611bf13a5b7baa592bfd5f..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Aquaveo Gms 8 2 Cracked.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Aquaveo Gms 8 2 Cracked


            Download Ziphttps://urlca.com/2uDdBE



            - -GMS 10.3.7 (32-bit), 17Jun18, 480MB. GMS 10.3.6 (32-bit), 07Jun18, 562MB. GMS 10.3.5 (32-bit), 12Jun18, 478MB. GMS 10.3.4 (32-bit), 09Jun18, 442MB. GMS 10.3.3 (32-bit), 08Jun18, 506MB. GMS 10.3.2 (32-bit), 23May18, 398MB. GMS 10.3.1 (32-bit), 07Apr18, 335MB. GMS 10.3.0 (32-bit), 06Apr18, 351MB. GMS 10.2.11 (32-bit), 28Apr18, 376MB. GMS 10.2.10 (32-bit), 24Apr18, 370MB. GMS 10.2.9 (32-bit), 15Apr18, 394MB. GMS 10.2.8 (32-bit), 13Apr18, 403MB. GMS 10.2.7 (32-bit), 03Apr18, 452MB. GMS 10.2.6 (32-bit), 10Mar18, 390MB. GMS 10.2.5 (32-bit), 05Mar18, 354MB. GMS 10.2.4 (32-bit), 03Mar18, 420MB. GMS 10.2.3 (32-bit), 27Feb18, 394MB. GMS 10.2.2 (32-bit), 26Feb18, 400MB. GMS 10.2.1 (32-bit), 22Feb18, 356MB. GMS 10.2.0 (32-bit), 22Feb18, 412MB. GMS 10.1.12 (32-bit), 13Feb18, 357MB. GMS 10.1.11 (32-bit), 11Feb18, 340MB. GMS 10.1.10 (32-bit), 08Feb18, 371MB. GMS 10.1.9 (32-bit), 25Jan18, 408MB. GMS 10.1.8 (32-bit), 21Jan18, 376MB. GMS 10.1.7 (32-bit), 12Jan18, 371MB. GMS 10.1.6 (32-bit 4fefd39f24
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Descargar Halo 3 Completo Para Pc 1 Link En Espanol !EXCLUSIVE!.md b/spaces/falterWliame/Face_Mask_Detection/Descargar Halo 3 Completo Para Pc 1 Link En Espanol !EXCLUSIVE!.md deleted file mode 100644 index 83fedf97d04e62b0869cce905e1a9e110b812fd5..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Descargar Halo 3 Completo Para Pc 1 Link En Espanol !EXCLUSIVE!.md +++ /dev/null @@ -1,102 +0,0 @@ - -

            Descargar Halo 3 Completo Para Pc 1 Link En Espanol

            - -

            Si eres fan de los juegos de acción y ciencia ficción, seguramente conoces la saga Halo, una de las más exitosas y populares de la historia de los videojuegos. Halo es una franquicia creada por Bungie Studios y publicada por Microsoft, que nos cuenta la historia del Jefe Maestro, un soldado mejorado genéticamente que debe luchar contra una invasión alienígena liderada por el Covenant, una alianza de razas extraterrestres fanáticas y hostiles.

            -

            Descargar Halo 3 Completo Para Pc 1 Link En Espanol


            Download 🌟 https://urlca.com/2uDcvF



            - -

            Entre los juegos de la saga Halo, uno de los más destacados es Halo 3, el tercer capítulo de la trilogía original, que fue lanzado en el año 2007 para la consola Xbox 360. Halo 3 nos ofrece una experiencia de juego increíble, con unos gráficos espectaculares, una banda sonora épica, una jugabilidad fluida y variada, y una historia apasionante y emocionante.

            - -

            Halo 3 nos sitúa en el año 2552, en plena guerra interestelar entre la humanidad y el Covenant. El Jefe Maestro regresa a la Tierra para defenderla de la invasión alienígena, junto con el Sargento Johnson y el Inquisidor, un antiguo líder del Covenant que se ha aliado con los humanos. El objetivo del Jefe Maestro es detener al Profeta de la Verdad, el líder supremo del Covenant, que planea activar un anillo Halo, una antigua arma de destrucción masiva creada por una civilización desaparecida llamada los Forerunners.

            - -

            Halo 3 nos ofrece una campaña para un jugador o cooperativa, donde podremos disfrutar de diferentes escenarios y situaciones, desde combates urbanos hasta batallas espaciales. También podremos conducir diversos vehículos y usar un amplio arsenal de armas humanas y alienígenas. Además, Halo 3 cuenta con un modo multijugador online, donde podremos competir o colaborar con otros jugadores en diferentes modos y mapas.

            - -

            Cómo descargar Halo 3 completo para PC en un solo link y en español

            - -

            Aunque Halo 3 fue lanzado originalmente para Xbox 360, muchos fans querían poder jugarlo también en su PC. Por eso, en el año 2020 se lanzó una versión para PC de Halo 3, como parte de la colección Halo: The Master Chief Collection, que incluye los seis juegos principales de la saga. Sin embargo, esta versión requiere comprar toda la colección y tener una cuenta de Steam para poder jugarla.

            -

            - -

            Si tú quieres descargar Halo 3 completo para PC en un solo link y en español, sin tener que pagar nada ni registrarte en ninguna plataforma, estás de suerte, porque en este artículo te vamos a mostrar cómo puedes hacerlo de forma gratuita y fácil. Solo tienes que seguir los pasos que te indicamos a continuación:

            - -
              -
            1. Lo primero que tienes que hacer es entrar en este link: https://repacksvillage.wordpress.com/2020/08/26/h4lo3/. Se trata de una página web que te ofrece la posibilidad de descargar Halo 3 completo para PC en un solo link y en español, sin necesidad de registrarte ni pagar nada.
            2. -
            3. Una vez que entres en el link, verás que hay un botón que dice "Torrent". Tienes que hacer clic en ese botón para descargar un archivo .torrent que contiene el juego. Para poder abrir ese archivo necesitas tener instalado el programa Utorrent, que puedes descargar gratis desde aquí: https://www.utorrent.com/intl/es/desktop/.
            4. -
            5. Cuando abras el archivo .torrent con Utorrent, se iniciará la descarga del juego. El tamaño del juego es de 4.96 GB, así que puede tardar un poco dependiendo de tu velocidad de internet. Una vez que se complete la descarga, tendrás el juego en tu PC.
            6. -
            7. Para instalar el juego, tienes que hacer doble clic en el archivo "setup.exe" que se encuentra dentro de la carpeta del juego. Se abrirá una ventana con el instalador del juego, donde tendrás que seguir las instrucciones que te indique. Asegúrate de elegir el idioma español cuando te lo pida.
            8. -
            9. Cuando termine la instalación, podrás jugar a Halo 3 completo para PC en español. Solo tienes que hacer clic en el icono del juego que se habrá creado en tu escritorio o en el menú de inicio. Disfruta de este fantástico juego y vive una aventura épica junto al Jefe Maestro.
            10. -
            - -

            Conclusión

            - -

            Descargar Halo 3 completo para PC en un solo link y en español es posible gracias a esta página web que te hemos mostrado. Así podrás disfrutar de uno de los mejores juegos de la saga Halo en tu ordenador, sin tener que pagar nada ni registrarte en ninguna plataforma. Solo tienes que seguir los pasos que te hemos indicado y podrás vivir una experiencia de juego increíble.

            - -

            Halo 3 es un juego que no te puedes perder si te gustan los juegos de acción y ciencia ficción. Te ofrece una campaña para un jugador o cooperativa llena de emoción y variedad, y un modo multijugador online donde podrás competir o colaborar con otros jugadores en diferentes modos y mapas. Además, cuenta con unos gráficos espectaculares, una banda sonora épica y una historia apasionante.

            - -

            No esperes más y descarga ya Halo 3 completo para PC en un solo link y en español. Te aseguramos que no te arrepentirás.

            -

            Qué requisitos necesita tu PC para jugar a Halo 3

            - -

            Para poder jugar a Halo 3 en tu PC, debes asegurarte de que tu ordenador cumple con los requisitos mínimos del juego. Estos requisitos son los siguientes:

            - -
              -
            • Sistema operativo: Windows 7 SP1 64 bits
            • -
            • Procesador: Intel Core i7-975 o AMD A12-9800 APU
            • -
            • Memoria RAM: 2 GB
            • -
            • Tarjeta gráfica: GeForce GTS 450 o Radeon R7 Graphics
            • -
            • Espacio en el disco: 4 GB
            • -
            • Direct X: DirectX 9
            • -
            - -

            Si tu PC cumple con estos requisitos, podrás jugar a Halo 3 sin problemas. Sin embargo, si quieres disfrutar de una mejor calidad gráfica y de sonido, te recomendamos que tu PC cumpla con los requisitos recomendados del juego. Estos requisitos son los siguientes:

            - -
              -
• Operating system: Windows 10 64-bit
• -
• Processor: Intel Core i5-4670K or AMD Ryzen 5 1600X
• -
• RAM: 8 GB
• -
• Graphics card: GeForce GTX 1060 or Radeon RX 480
• -
• Disk space: 4 GB
• -
• DirectX: DirectX 12
            • -
            - -

If your PC meets these requirements, you will be able to play Halo 3 at resolutions of up to 4K with surround sound. You will also be able to adjust the graphics and sound options to your liking and to your computer's capabilities.

            - -

How to download and install Halo 3 on your PC step by step

            - -

Now that you know what requirements your PC needs to play Halo 3, we are going to explain how to download and install Halo 3 on your PC step by step. To do so, just follow the steps below:

            - -
              -
1. Go to this link: https://repacksvillage.wordpress.com/2020/08/26/h4lo3/. It is a website that lets you download the full Halo 3 for PC in a single link and in Spanish, without having to register or pay anything.
2. -
3. Click the button labeled "Torrent" to download a .torrent file that contains the game. To open that file you need to have the Utorrent program installed, which you can download for free from here: https://www.utorrent.com/intl/es/desktop/.
4. -
5. Open the .torrent file with Utorrent and the game download will start. The game is 4.96 GB in size, so it may take a while depending on your internet speed. Once the download is complete, you will have the game on your PC.
6. -
7. Double-click the "setup.exe" file inside the game folder and a window with the game installer will open. Follow the instructions it gives you and make sure to choose Spanish when it asks for the language.
8. -
9. When the installation finishes, you will be able to play the full Halo 3 for PC in Spanish. Just click the game icon that will have been created on your desktop or in the Start menu.
            10. -
            - -

That is how easy it is to download and install Halo 3 on your PC. Now all that is left is to enjoy this incredible game and live an epic adventure alongside the Master Chief.

            -

What is the difference between Halo 3 and Halo 3 ODST

            - -

Halo 3 and Halo 3 ODST are two games in the Halo saga that share the same universe and the same graphics engine, but they have some important differences. These are some of the differences between Halo 3 and Halo 3 ODST:

            - -
              -
• The story. Halo 3 continues the story of the Master Chief and the Arbiter, who fight against the Covenant and the Flood to save humanity and the universe. Halo 3 ODST tells the story of a group of elite soldiers called ODSTs (Orbital Drop Shock Troopers), who infiltrate the city of New Mombasa to investigate what happened after the Covenant attack.
• -
• The protagonist. In Halo 3 we control the Master Chief, a genetically enhanced supersoldier who has special armor and an artificial intelligence called Cortana. In Halo 3 ODST we control the Rookie, an ODST soldier who has no advantage over his enemies and who must use his visor to analyze the environment.
• -
• The gameplay. Halo 3 has a more linear, action-focused style of play, with wide and varied settings, vehicles, weapons and enemies. Halo 3 ODST has a more open, exploration-focused style of play, with darker, more urban settings and fewer vehicles, weapons and enemies.
• -
• The multiplayer. Halo 3 has a very complete and popular online multiplayer, with different modes and maps, a map editor called Forge and a theater mode for recording and editing matches. Halo 3 ODST has a more limited and less popular online multiplayer, with only two modes: Firefight, a cooperative mode against waves of enemies; and Golden Skull, a competitive mode against other players.
            • -
            - -

Halo 3 and Halo 3 ODST are two games that each have their own features and appeal, but they also share many common elements. If you like action and science fiction games, we recommend you try them both.

            - -

How to download and install Halo 3 ODST for PC in a single link and in Spanish

            - -

If you want to download and install Halo 3 ODST for PC in a single link and in Spanish, you have several options available. One of them is to buy Halo: The Master Chief Collection on Steam, which includes the six main games of the saga at a reasonable price. Another option is to download Halo 3 ODST for PC in a single link and in Spanish from a website that offers it for free and without complications.

            - -

In this article we have shown you a website that lets you download Halo 3 ODST for PC in a single link and in Spanish, for free and easily. Just follow the steps described above and you will be able to enjoy this incredible game on your computer.

            - -

However, keep in mind that by downloading Halo 3 ODST for PC in a single link and in Spanish from an unofficial website, you may be infringing the game's copyright and exposing yourself to possible viruses or malware. For that reason, we recommend that you download the game from a safe and trustworthy source.

            -

Conclusion

            - -

In this article we have shown you how to download the full Halo 3 for PC in a single link and in Spanish, as well as Halo 3 ODST for PC in a single link and in Spanish. We have explained what Halo 3 offers as a PC game, what requirements your PC needs to play Halo 3, how to download and install Halo 3 on your PC step by step, what advantages downloading the full Halo 3 for PC in a single link and in Spanish has, how to solve possible problems when downloading the full Halo 3 for PC in a single link and in Spanish, and what users think of Halo 3 for PC. We have also explained the difference between Halo 3 and Halo 3 ODST, and how to download and install Halo 3 ODST on your PC step by step.

            - -

We hope this article has been useful to you and that you have been able to enjoy one of the best games in the Halo saga on your computer. If you liked this article, share it with your friends or with other Halo fans. You can also leave us a comment with your opinion about the game or about how to download and install it.

            - -

Thank you for reading this article, and see you next time.

            -
            -
            \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md b/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md deleted file mode 100644 index 650ee830da9e3918a7d95e9d39b8a01ef336d24b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Gunbound Season 3 Aimbot Download !!LINK!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

            gunbound season 3 aimbot download


            Download Zip » https://urlca.com/2uDcQK



- -gunbound 3 download, gunbound 3d model, error 310 gunbound, gunbound season 3 hacks, gunbound season 3 aimbot, gunboundm eraser
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md deleted file mode 100644 index ef30c2124cd8d586d6f8c0ba1cfe54688f42ff8c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Hindi Medium Full HOT Movie 1080p Download Torrent.md +++ /dev/null @@ -1,32 +0,0 @@ -

            Hindi Medium Full Movie 1080p Download Torrent


Download File: https://urlca.com/2uDdze



            - -Hindi Medium is released in. The latest Hindi Medium released in Desi Cinemas. VFX: Marakkaji Music: Kami and Nava. The minimum available update option is Release 2, so you can select any of the below update option: 2. Now you can see all the action of the movie in the HD quality. Free Download Hindi Medium on Dailymotion. Watch Hindi Medium movie online on Dailymotion. Produced by Raj Batra. It was Released on June 29, 2018 in India. Watch Hindi Medium movie online on Dailymotion.. Hindi Medium - Desi Cinemas (TV Series Online) - Wikipedia. Hindi Medium - Wikipedia. Hindi Medium movie trailer - 2.0.0 Hindi Medium TV-series released on, on Desi Cinemas.Q: - -How do I clear the Search Bar and View? - -I have this code: - -- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event - - - - [self.searchDisplayController endSearch]; - - [self.searchDisplayController setActive:NO animated:NO]; - - [self.searchDisplayController setSearchBar:nil]; - - - -It clears the search bar, but the View is still there. How do I clear that too? - -A: - -Make sure your searchDisplayController is connected to the searchBar. You can do this by unchecking the searchDisplayController in your nib file. - -In the past, it has been known to develop a series of interconnects between a radiation source and a converter for generating an electrical signal representative of the energy of the beam of radiation emitted by the source. In many applications, such interconnects are used as an output for a radiation detector. The output is responsive to the radiation intensity, the electrical signal being converted to a rate of change of beam intensity representative of the energy of the beam. Such a device is shown in U.S. Pat. No. 3,886,333, issued to Burghart et al. on May 27, 1975. The present invention is an improvement on the device shown in that patent, and provides a simple and efficient system for conversion of a radiation beam having non-linear characteristics to an electrical signal, the system being particularly adaptable for use with a pulsed radiation source and a storage device for such a source. - -The Burghart et al. system has been employed in 4fefd39f24
            -
            -
            -

diff --git a/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py b/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py deleted file mode 100644 index 5b820368e2bb63b82c4ebbdadca162fe0495098c..0000000000000000000000000000000000000000 --- a/spaces/farkmu45/instagram-clothes-psychology-streamlit/Processor.py +++ /dev/null @@ -1,28 +0,0 @@
-import PIL.Image as Image
-import torch
-from fastai.vision.all import Learner
-from numpy.typing import NDArray
-from typing import List
-
-class Processor():
-    def __init__(self, learn: Learner):
-        self.inference = learn
-        self.__model = torch.hub.load(
-            'ultralytics/yolov5', 'yolov5x6', trust_repo=True)
-
-    def classify_image(self, images: NDArray) -> str:
-        return self.inference.predict(images)
-
-    def filter_image(self, image: Image) -> bool:
-        results = self.__model(image)
-        results = results.pandas().xyxy[0]
-        person_detected = 0
-
-        for name in results['name']:
-            if name == 'person':
-                person_detected += 1
-
-        if person_detected == 0 or person_detected > 1:
-            return False
-
-        return True
diff --git a/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md b/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md deleted file mode 100644 index 6933feeb38e16451a7739bad7ed2ad5b79d9b3ed..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Badminton League Unlimited Money APK How to Get It for Free.md +++ /dev/null @@ -1,77 +0,0 @@ -
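For context on the Processor.py module removed above: it chains a YOLOv5 person filter with a fastai classifier. The sketch below is only an illustration of how those two methods fit together and is not code from the repository; the `export.pkl` path, the image filename, and the `Processor` import are assumptions.

    # Hypothetical usage sketch for the deleted Processor class (illustrative only).
    import numpy as np
    from PIL import Image
    from fastai.vision.all import load_learner

    from Processor import Processor  # the class shown in the deleted file above

    learn = load_learner('export.pkl')   # assumed: a trained fastai Learner exported by the Space
    processor = Processor(learn)

    img = Image.open('photo.jpg')        # assumed: a local test photo

    # filter_image returns True only when YOLOv5 detects exactly one person in the photo.
    if processor.filter_image(img):
        # classify_image delegates to Learner.predict; fastai returns a
        # (decoded label, class index, probabilities) tuple rather than a plain string.
        prediction = processor.classify_image(np.array(img))
        print(prediction)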

            Badminton League APK Unlimited Money: How to Get It and What to Do With It

            -

            Badminton is a popular sport that can be played by anyone, anywhere. But if you want to take your game to the next level, you might want to try badminton league apk, a mobile game that lets you compete with other players online. In this game, you can customize your character, choose your racket, and play in various modes and tournaments. You can also earn coins and gems, which are the in-game currencies that you can use to buy items and upgrade your skills.

            -

            badminton league apk unlimited money


            Download File ✯✯✯ https://urllie.com/2uNzxS



            -

            However, earning coins and gems can be slow and tedious, especially if you want to unlock all the features and items in the game. That's why some players look for ways to get unlimited money in badminton league apk. Unlimited money means having unlimited coins and gems, which can give you an edge over your opponents and make the game more fun and exciting. But how can you get unlimited money in badminton league apk? There are two main methods: using modded versions or cheats.

            -

            Modded Versions

            -

            A modded version is a modified version of the original game that has been altered by someone to change some aspects of the game. For example, a modded version of badminton league apk might have unlimited coins and gems already available for you to use. This way, you don't have to earn them by playing the game.

            -

            There are many websites that offer modded versions of badminton league apk for free download. However, you should be careful when downloading modded versions from unknown sources, as they might contain viruses or malware that can harm your device or steal your personal information. You should also check the reviews and ratings of the modded versions before downloading them, as some of them might not work properly or have bugs.

            -

            Cheats

            -

            Cheats are codes or commands that you can enter in the game to activate certain effects or functions. For example, a cheat for badminton league apk might give you unlimited coins and gems without having to download a modded version. You can find cheats for badminton league apk online, either on websites or videos that show you how to use them.

            -

            badminton league mod apk download free money
            -badminton league hack apk unlimited coins and gems
            -badminton league cheats apk no root money
            -badminton league game apk mod money unlocked
            -badminton league premium apk unlimited cash
            -badminton league pro apk hack money and energy
            -badminton league cracked apk free money and gold
            -badminton league latest apk mod money and items
            -badminton league online apk unlimited money and diamonds
            -badminton league 3d apk hack money and tickets
            -badminton league android apk mod money and characters
            -badminton league offline apk unlimited money and skills
            -badminton league full apk free money and costumes
            -badminton league update apk mod money and weapons
            -badminton league 2023 apk unlimited money and levels
            -badminton league best apk hack money and power-ups
            -badminton league new apk free money and equipment
            -badminton league 2 apk mod money and features
            -badminton league fun apk unlimited money and modes
            -badminton league super apk hack money and rewards
            -badminton league mega apk free money and bonuses
            -badminton league vip apk mod money and extras
            -badminton league easy apk unlimited money and upgrades
            -badminton league cool apk hack money and accessories
            -badminton league awesome apk free money and prizes
            -badminton league realistic apk mod money and physics
            -badminton league ultimate apk unlimited money and challenges
            -badminton league amazing apk hack money and graphics
            -badminton league fantastic apk free money and sounds
            -badminton league incredible apk mod money and animations
            -badminton league extreme apk unlimited money and difficulty
            -badminton league wonderful apk hack money and effects
            -badminton league superb apk free money and ratings
            -badminton league excellent apk mod money and reviews
            -badminton league marvelous apk unlimited money and achievements
            -badminton league splendid apk hack money and leaderboards
            -badminton league brilliant apk free money and statistics
            -badminton league outstanding apk mod money and customization
            -badminton league magnificent apk unlimited money and options
            -badminton league phenomenal apk hack money and controls

            -

            However, you should be aware that using cheats might ruin the fun and challenge of the game, as well as make it unfair for other players who play by the rules. You should also know that using cheats might get you banned from the game or cause your account to be deleted. Therefore, you should use cheats at your own risk and discretion.

            -

            Conclusion

            -

            Badminton league apk is a fun and addictive game that lets you play badminton with other players online. You can earn coins and gems in the game to buy items and upgrade your skills. However, if you want to get unlimited money in badminton league apk, you can either use modded versions or cheats. Modded versions are modified versions of the original game that have unlimited coins and gems already available for you to use. Cheats are codes or commands that you can enter in the game to get unlimited coins and gems without downloading anything.

            -

            However, both methods have their pros and cons. Modded versions might contain viruses or malware that can harm your device or steal your personal information. Cheats might ruin the fun and challenge of the game and get you banned from the game or cause your account to be deleted. Therefore, you should be careful when using these methods and only use them if you really want to.

            -

            Here are some tips and warnings for using these methods:

            -
              -
            • Always backup your data before using modded versions or cheats.
            • -
            • Only download modded versions from trusted sources.
            • -
            • Check the reviews and ratings of modded versions before downloading them.
            • -
            • Don't use cheats to harass or bully other players, as they might report you to the game's moderators.
            • -
            • Don't spend all your unlimited money on unnecessary items, as they might clutter your inventory or make the game boring.
            • -
            • Enjoy the game and have fun, but don't forget to play fair and respect other players.
            • -
            -

            FAQs

            -

            Here are some frequently asked questions about badminton league apk unlimited money:

            -

            Q: Is badminton league apk free to download and play?

            -

            A: Yes, badminton league apk is free to download and play. However, it contains ads and in-app purchases that you can disable or buy with real money.

            -

            Q: Is badminton league apk safe to download and play?

            -

            A: Yes, badminton league apk is safe to download and play, as long as you download it from the official Google Play Store or App Store. However, if you download modded versions or cheats from unknown sources, you might expose your device or personal information to risks.

            -

            Q: Is badminton league apk compatible with my device?

            -

            A: Badminton league apk is compatible with most Android and iOS devices that have at least 4.1 and 8.0 versions respectively. However, some devices might experience lag or crashes due to low performance or memory.

            -

            Q: How can I contact the developers of badminton league apk?

            -

            A: You can contact the developers of badminton league apk by sending an email to redfishgamestudio@gmail.com or by visiting their Facebook page at https://www.facebook.com/Badminton-League-203729826806154/.

            -

            Q: How can I improve my skills in badminton league apk?

            -

            A: You can improve your skills in badminton league apk by practicing regularly, learning from other players, watching tutorials and tips online, and upgrading your character and racket.

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md b/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md deleted file mode 100644 index 871a6062190e2966e5fa09eaff460e9b7452e288..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Build Your Dream Zoo with Merge Animals-My Perfect Zoo APK Download.md +++ /dev/null @@ -1,144 +0,0 @@ -
            -

            Merge Animals My Perfect Zoo: A Fun and Creative Merge Game

            -

            Do you love animals and want to create your own zoo? Do you enjoy merging games and want to discover new and exotic creatures? If you answered yes to any of these questions, then you should try Merge Animals My Perfect Zoo, a free casual game for Android devices that lets you merge different animals and build your dream zoo.

            -

            What is Merge Animals My Perfect Zoo?

            -

            Merge Animals My Perfect Zoo is a game developed by Tara Westover, a casual game developer who has created other popular merge games such as Merge Plants and Merge Cars. In this game, you can merge dozens of different animals, from saber toothed tigers and mammoths to dinosaurs and unicorns, and watch them evolve into new and amazing species. You can also let your hunters catch animals and tame them for your use, and decorate your zoo with various items and attractions.

            -

            merge animals my perfect zoo apk


            DOWNLOAD ★★★★★ https://urllie.com/2uNEwY



            -

            The concept and gameplay of Merge Animals My Perfect Zoo

            -

            The concept of Merge Animals My Perfect Zoo is simple: you start with two identical hunters, who can catch animals for you. You can drag and drop them on the same spot to merge them into a higher level hunter, who can catch more advanced animals. You can also drag and drop two identical animals on the same spot to merge them into a new animal, who will have different traits and abilities. You can collect coins from your animals, which you can use to buy more hunters or animals, or upgrade your zoo. You can also complete various challenges and quests to earn rewards and unlock new features.

            -

            The features and benefits of Merge Animals My Perfect Zoo

            -

            Merge Animals My Perfect Zoo has many features and benefits that make it a fun and creative merge game. Some of them are:

            -
              -
            • It has a diverse range of animals, from prehistoric to mythical, that you can merge and discover.
            • -
            • It has a simple and intuitive operation, with easy drag-and-drop controls.
            • -
            • It has a colorful and cute graphics style, with lively animations and sound effects.
            • -
            • It has a relaxing and enjoyable gameplay, with no time limit or pressure.
            • -
            • It has a rewarding and addictive progression system, with many levels, achievements, and upgrades.
            • -
            • It has a social aspect, where you can share your zoo with your friends or visit other players' zoos.
            • -
            -

            How to download and install Merge Animals My Perfect Zoo APK?

            -

            If you want to play Merge Animals My Perfect Zoo on your Android device, you can download and install the APK file from various sources online. However, you should be careful about the source that you choose, as some APK files may contain viruses or malware that can harm your device. Here are the steps to download and install Merge Animals My Perfect Zoo APK safely:

            -

            The steps to download and install Merge Animals My Perfect Zoo APK

            -
              -
            1. Go to a reputable website that offers APK files for Android games, such as APKCombo, Google Play Store, or Softonic.
            2. -
            3. Search for "Merge Animals My Perfect Zoo" in the search bar of the website.
            4. -
            5. Select the latest version of the game from the results, and click on the download button.
            6. -
            7. Wait for the APK file to be downloaded on your device.
            8. -
            9. Go to your device's settings, and enable the option to install apps from unknown sources.
            10. Locate the APK file on your device, and tap on it to start the installation process. -
            11. Follow the instructions on the screen, and wait for the installation to be completed.
            12. -
            13. Launch the game from your app drawer, and enjoy merging animals and building your perfect zoo.
            14. -
            -

            The tips and tricks to play Merge Animals My Perfect Zoo APK

            -

            If you want to play Merge Animals My Perfect Zoo APK more effectively and efficiently, here are some tips and tricks that you can use:

            -
              -
            • Use the free gifts and rewards that you get from watching ads, completing tasks, or logging in daily. They can help you get more coins, hunters, or animals.
            • -
            • Upgrade your hunters and animals regularly, as they will increase their productivity and value.
            • -
            • Merge your animals as much as possible, as they will unlock new and rare species that can generate more coins.
            • -
            • Expand your zoo as you progress, as it will give you more space to place your animals and decorations.
            • -
            • Visit other players' zoos and send them gifts, as they may return the favor and help you grow your zoo.
            • -
            -

            Why should you play Merge Animals My Perfect Zoo APK?

            -

            Merge Animals My Perfect Zoo APK is a game that can offer you many benefits, such as:

            -

            merge animals my perfect zoo mod apk
            -merge animals my perfect zoo hack
            -merge animals my perfect zoo cheats
            -merge animals my perfect zoo download
            -merge animals my perfect zoo game
            -merge animals my perfect zoo online
            -merge animals my perfect zoo free
            -merge animals my perfect zoo review
            -merge animals my perfect zoo tips
            -merge animals my perfect zoo guide
            -merge animals my perfect zoo gameplay
            -merge animals my perfect zoo update
            -merge animals my perfect zoo latest version
            -merge animals my perfect zoo android
            -merge animals my perfect zoo ios
            -merge animals my perfect zoo pc
            -merge animals my perfect zoo windows
            -merge animals my perfect zoo mac
            -merge animals my perfect zoo laptop
            -merge animals my perfect zoo desktop
            -merge animals my perfect zoo appbrain
            -merge animals my perfect zoo google play
            -merge animals my perfect zoo app store
            -merge animals my perfect zoo tara westover
            -merge animals my perfect zoo developer
            -merge animals my perfect zoo casual game
            -merge animals my perfect zoo simulation game
            -merge animals my perfect zoo fun game
            -merge animals my perfect zoo addictive game
            -merge animals my perfect zoo best game
            -merge animals my perfect zoo new game
            -merge animals my perfect zoo popular game
            -merge animals my perfect zoo top game
            -merge animals my perfect zoo rated game
            -merge animals my perfect zoo 500k downloads
            -merge animals my perfect zoo 2023 game
            -merge animals my perfect zoo january 2023 release date
            -how to play merge animals my perfect zoo apk
            -how to install merge animals my perfect zoo apk
            -how to download merge animals my perfect zoo apk
            -how to hack merge animals my perfect zoo apk
            -how to cheat in merge animals my perfect zoo apk
            -how to get free coins in merge animals my perfect zoo apk
            -how to unlock all hunters in merge animals my perfect zoo apk
            -how to catch all dinosaurs in merge animals my perfect zoo apk
            -how to tame all mammoths in merge animals my perfect zoo apk
            -how to build the best park in merge animals my perfect zoo apk
            -how to earn more money in merge animals my perfect zoo apk

            -

            The reasons to play Merge Animals My Perfect Zoo APK

            -
              -
            • It can stimulate your creativity and imagination, as you can create your own unique zoo with different animals and decorations.
            • -
            • It can improve your concentration and logic skills, as you need to plan and strategize how to merge your animals and hunters effectively.
            • -
            • It can reduce your stress and boredom, as it is a relaxing and enjoyable game that you can play anytime and anywhere.
            • -
            • It can entertain and educate you, as you can learn about various animals and their characteristics.
            • -
            • It can satisfy your curiosity and sense of achievement, as you can discover new and amazing animals that you have never seen before.
            • -
            -

            The reviews and ratings of Merge Animals My Perfect Zoo APK

            -

            Merge Animals My Perfect Zoo APK has received positive reviews and ratings from many players who have tried it. Here are some of the comments that they have left on the Google Play Store:

            - - - - - - - - - - - - - - - - - -5 stars - - - - - - - - - - - - -
            NameRatingComment
            Amanda Smith5 stars"This game is so fun and addictive. I love merging different animals and seeing what they turn into. The graphics are cute and colorful, and the game is easy to play. I recommend this game to anyone who likes merge games."
            Brian Jones4 stars"I enjoy playing this game a lot. It is relaxing and entertaining. The only thing that I don't like is that there are too many ads. Sometimes they interrupt the gameplay or make the game lag. I hope the developer can fix this issue."
            Chloe Lee"This game is awesome. I love how I can merge animals and create my own zoo. The game is very creative and fun. The animals are adorable and the zoo is beautiful. I like how I can visit other players' zoos and send them gifts."
            David Wilson3 stars"This game is okay. It is not very challenging or exciting. It is just a simple merge game with animals. The game is repetitive and boring after a while. I wish there were more features and options to customize the zoo."
            Emma Brown4 stars"This game is cute and relaxing. I like merging animals and seeing what they look like. The game is easy to play and suitable for all ages. The only problem is that the game crashes sometimes and I lose my progress. I hope the developer can fix this bug."
            -

            Conclusion

            -

            Merge Animals My Perfect Zoo APK is a fun and creative merge game that lets you merge different animals and build your dream zoo. You can discover dozens of different animals, from prehistoric to mythical, and watch them evolve into new and amazing species. You can also decorate your zoo with various items and attractions, and share it with your friends or visit other players' zoos. The game has a simple and intuitive operation, a colorful and cute graphics style, a relaxing and enjoyable gameplay, a rewarding and addictive progression system, and a social aspect. If you are looking for a casual game that can stimulate your creativity and imagination, improve your concentration and logic skills, reduce your stress and boredom, entertain and educate you, and satisfy your curiosity and sense of achievement, then you should download and install Merge Animals My Perfect Zoo APK on your Android device.

            -

            FAQs

            -

            What are the minimum requirements to play Merge Animals My Perfect Zoo APK?

            -

            To play Merge Animals My Perfect Zoo APK, you need an Android device that has at least Android 4.4 version, 100 MB of free storage space, and a stable internet connection.

            -

            How can I get more coins in Merge Animals My Perfect Zoo APK?

            -

            You can get more coins in Merge Animals My Perfect Zoo APK by merging your animals, collecting coins from your animals, completing challenges and quests, watching ads, or buying coins with real money.

            -

            How can I unlock new animals in Merge Animals My Perfect Zoo APK?

            -

            You can unlock new animals in Merge Animals My Perfect Zoo APK by merging your existing animals, buying new animals with coins, or getting new animals from gifts or rewards.

            -

            How can I upgrade my hunters in Merge Animals My Perfect Zoo APK?

            You can upgrade your hunters in Merge Animals My Perfect Zoo APK by merging two identical hunters, buying new hunters with coins, or getting new hunters from gifts or rewards.

            -

            How can I decorate my zoo in Merge Animals My Perfect Zoo APK?

            -

            You can decorate your zoo in Merge Animals My Perfect Zoo APK by buying various items and attractions with coins, such as fences, trees, flowers, benches, statues, fountains, rides, etc. You can also change the background and theme of your zoo, such as forest, desert, ice, etc.

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md b/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md deleted file mode 100644 index cddba9df8f1bfd8a9bbe5216a2cdb269bae4ff0d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Catch the Pesky Raccoon and Save the Gold in Talking Tom Gold Run 3.md +++ /dev/null @@ -1,132 +0,0 @@ - -

            Talking Tom Gold Run 3: A Fun and Exciting Endless Runner Game

            -

            Introduction

            -

            Do you love endless runner games? Do you enjoy playing with cute and funny characters? Do you want to have a thrilling and adventurous experience on your mobile device? If you answered yes to any of these questions, then you should definitely check out Talking Tom Gold Run 3, the latest installment in the popular Talking Tom franchise.

            -

            talking tom gold run 3 game download


Download: https://urllie.com/2uNCIF



            -

            What is Talking Tom Gold Run 3?

            -

            Talking Tom Gold Run 3 is an endless runner game developed by Outfit7, the creators of My Talking Tom, My Talking Angela, My Talking Tom Friends and Talking Tom Hero Dash. In this game, you have to help Talking Tom and his friends chase down Roy Rakoon, a pesky raccoon who stole all their gold. Along the way, you have to collect as many gold bars as possible, dodge obstacles, use power-ups, and explore different worlds.

            -

            Why should you play Talking Tom Gold Run 3?

            -

            Talking Tom Gold Run 3 is a fun and exciting game that will keep you entertained for hours. Here are some reasons why you should play this game:

            -
              -
            • It has amazing graphics and animations that make the game look realistic and lively.
            • -
            • It has catchy music and sound effects that enhance the mood and atmosphere of the game.
            • -
            • It has simple and intuitive controls that make the game easy to play for anyone.
            • -
            • It has a variety of characters, worlds, outfits, and power-ups that make the game diverse and interesting.
            • -
            • It has a rewarding system that lets you upgrade your houses, unlock new items, and get bonuses and rewards.
            • -
            • It has a competitive element that lets you challenge yourself, beat your high score, and compete with other players around the world.
            • -
            -

            Features of Talking Tom Gold Run 3

            -

            Thrilling chases and action-packed time trials

            -

            One of the main features of Talking Tom Gold Run 3 is the thrilling chases and action-packed time trials. In this mode, you have to run as fast as you can, avoid obstacles, collect gold bars, and catch up with Roy Rakoon. The faster you run, the more gold bars you get. The more gold bars you get, the higher your score. The higher your score, the better your rank. You can also enter special zones that are marked by tunnels. These zones will transport you to a different world where you can collect more gold bars and encounter new challenges.

            -

            Exciting worlds and awesome power-ups

            -

            Another feature of Talking Tom Gold Run 3 is the exciting worlds and awesome power-ups. In this game, you can explore different worlds that have their own themes, environments, obstacles, and enemies. Some of the worlds are: Snowy Streets, Chinese Village, Wild West, Hawaii Beach, Pirate Cove, Dragon Castle, Space Station, Candy Land, and more. Each world has its own unique features and surprises that will keep you on your toes. You can also use various power-ups that will help you in your chase. Some of the power ups are: Magnet, Helmet, Double Bars, Plane, and more. Each power-up has its own effect and duration that will make your run more fun and exciting.

            -

            Talking Tom Gold Run 4+ app for iPhone and iPad
            -How to play Talking Tom speed, slide and dodge game
            -Download Talking Tom Gold Run Outfit7 Limited for Android
            -Talking Tom Gold Run tips and tricks to catch Roy Rakoon
            -Best cat runner game with Talking Tom and friends
            -Talking Tom Gold Run review and rating on App Store
            -Talking Tom Gold Run latest version update and features
            -Talking Tom Gold Run action-packed time trials and worlds
            -Talking Tom Gold Run awesome power ups and outfits
            -Talking Tom Gold Run customer support and privacy policy
            -Talking Tom Gold Run YouTube videos and trailers
            -Talking Tom Gold Run in-app purchases and virtual currency
            -Talking Tom Gold Run COPPA Safe Harbor certification by PRIVO
            -Talking Tom Gold Run unlock Talking Angela, Ginger, Ben and Hank
            -Talking Tom Gold Run creators Outfit7 and other apps
            -Talking Tom Gold Run thrilling chases and exciting adventures
            -Talking Tom Gold Run offline mode and data safety
            -Talking Tom Gold Run editor's choice and 500M+ downloads
            -Talking Tom Gold Run 30 seconds gameplay challenge
            -Talking Tom Gold Run net energy gain and mini Sun experiment
            -Talking Tom Gold Run 100 million°C fusion reactor in South Korea
            -Talking Tom Gold Run seven times hotter than the Sun's core
            -Talking Tom Gold Run 15 million degrees kelvins temperature comparison
            -Talking Tom Gold Run holy grail fusion experiment to create a mini Sun
            -Talking Tom Gold Run nuclear fusion breakthrough and reactor run
            -Talking Tom Gold Run free download for PC Windows 10/8/7
            -Talking Tom Gold Run online play without installation or registration
            -Talking Tom Gold Run mod apk unlimited money and gold bars
            -Talking Tom Gold Run hack tool no survey no human verification
            -Talking Tom Gold Run cheats codes and glitches for Android and iOS
            -Talking Tom Gold Run alternatives and similar games to try out
            -Talking Tom Gold Run comparison with Subway Surfers and Temple Run
            -Talking Tom Gold Run fun facts and trivia about the game and characters
            -Talking Tom Gold Run fan art and wallpapers for desktop and mobile
            -Talking Tom Gold Run merchandise and toys for kids and adults
            -Talking Tom Gold Run memes and jokes to make you laugh
            -Talking Tom Gold Run fan fiction and stories to read online
            -Talking Tom Gold Run cosplay and costumes for Halloween or parties
            -Talking Tom Gold Run quiz and trivia to test your knowledge of the game
            -Talking Tom Gold Run feedback and suggestions for improvement or new features

            -

            Friends to unlock and fun new outfits

            -

            A third feature of Talking Tom Gold Run 3 is the friends to unlock and fun new outfits. In this game, you can play with different characters from the Talking Tom franchise, such as Talking Tom, Talking Angela, Talking Hank, Talking Ben, Talking Ginger, and more. You can also unlock new characters by collecting enough gold bars or by completing certain tasks. Each character has its own personality and voice that will make you laugh and smile. You can also customize your characters by changing their outfits. You can choose from a variety of outfits that suit your style and mood. Some of the outfits are: Cowboy, Ninja, Astronaut, Pirate, Dragon, Candy, and more. Each outfit has its own special effect that will enhance your run.

            -

            Tips and Tricks for Talking Tom Gold Run 3

            -

            Go for the house upgrades for more points

            -

            One of the tips and tricks for Talking Tom Gold Run 3 is to go for the house upgrades for more points. In this game, you can use your gold bars to upgrade your houses. Each house has its own theme and design that matches the world you are in. For example, you can upgrade your Snowy House in the Snowy Streets world, your Chinese House in the Chinese Village world, your Western House in the Wild West world, and so on. Upgrading your houses will not only make them look nicer and cooler, but also increase your score multiplier. The higher your score multiplier, the more points you get for each gold bar you collect. Therefore, upgrading your houses is a good way to boost your score and rank.

            -

            Open the vaults for bonuses and rewards

            -

            Another tip and trick for Talking Tom Gold Run 3 is to open the vaults for bonuses and rewards. In this game, you can find vaults that are hidden in some of the worlds. These vaults contain valuable items that will help you in your run. Some of the items are: extra gold bars, power-ups, gems, keys, tokens, stickers, and more. To open a vault, you need to collect enough keys that are scattered throughout the worlds. You can also get keys by watching ads or by completing daily missions. Opening a vault will give you a random item that will make your run more enjoyable and rewarding.

            -

            Move fast, but not too recklessly

            -

            A third tip and trick for Talking Tom Gold Run 3 is to move fast, but not too recklessly. In this game, you need to move fast to catch up with Roy Rakoon and to collect as many gold bars as possible. However, moving too fast can also be risky, as you might crash into obstacles or miss important items. Therefore, you need to balance your speed and your caution when running. You need to be alert and attentive to the surroundings and react quickly to the changes. You need to swipe left or right to change lanes, swipe up to jump over obstacles or gaps, swipe down to slide under obstacles or bridges, and tap to use power-ups or activate special zones.

            -

            All characters play the same way

            -

            A fourth tip and trick for Talking Tom Gold Run 3 is to know that all characters play the same way. In this game, you can choose from different characters that have their own appearance and voice. However, these characters do not have any difference in terms of gameplay or performance. They all run at the same speed, have the same hitbox size, have the same power-up effects, and have the same score multiplier. Therefore, you do not need to worry about choosing the best character for your run. You can simply pick the one that you like the most or the one that suits your mood. The only thing that matters is your skill and your strategy.

            -

            The plane power-up is the most useful for collecting gold

            -

            A fifth tip and trick for Talking Tom Gold Run 3 is to know that the plane power-up is the most useful for collecting gold. In this game, you can use different power-ups that will give you various advantages and effects. However, among all the power-ups, the plane power-up is the most beneficial for collecting gold bars. The plane power-up will make you fly in the air and collect all the gold bars in your path. You do not need to worry about obstacles or enemies, as you can fly over them. You also do not need to change lanes or jump or slide, as you can fly straight ahead. The plane power-up will last for a few seconds, but it will give you a huge amount of gold bars. Therefore, you should always try to get the plane power-up whenever you see it.

            -

            Reviews of Talking Tom Gold Run 3

            -

            What do players say about Talking Tom Gold Run 3?

            -

            Talking Tom Gold Run 3 is a popular and well-received game that has millions of downloads and positive ratings on the app stores. Here are some of the reviews from the players who have played this game:

            -
            -

            "This game is awesome! I love the graphics, the music, the characters, and the gameplay. It is so fun and addictive. I play it every day and I never get bored. It is one of the best games I have ever played."

            -

            "This game is amazing! It has so many worlds, outfits, power-ups, and surprises. It is so exciting and adventurous. I like how I can customize my characters and upgrade my houses. It is a great game for everyone."

            -

            "This game is fantastic! It has a lot of challenges, missions, rewards, and competitions. It is so thrilling and satisfying. I like how I can compete with other players and beat my high score. It is a very challenging game."

            -
            -

            What are the pros and cons of Talking Tom Gold Run 3?

            -

            Like any other game, Talking Tom Gold Run 3 has its pros and cons that you should consider before playing it. Here are some of them:

            - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
            ProsCons
            It has amazing graphics and animations.It can be repetitive and monotonous.
            It has catchy music and sound effects.It can be noisy and annoying.
            It has simple and intuitive controls.It can be glitchy and unresponsive.
            It has a variety of characters, worlds, outfits, and power-ups.It can be expensive and time-consuming.
            It has a rewarding system that lets you upgrade your houses, unlock new items, and get bonuses and rewards.It can be frustrating and unfair.
            It has a competitive element that lets you challenge yourself, beat your high score, and compete with other players around the world.It can be stressful and addictive.
            -

            Conclusion

            -

            Summary of the main points

            -

            Talking Tom Gold Run 3 is an endless runner game that lets you help Talking Tom and his friends chase down Roy Rakoon, a pesky raccoon who stole all their gold. In this game, you have to run as fast as you can, avoid obstacles, collect gold bars, use power-ups, and explore different worlds. You can also play with different characters, customize their outfits, upgrade their houses, open vaults, and compete with other players. Talking Tom Gold Run 3 is a fun and exciting game that will keep you entertained for hours.

            -

            Call to action

            -

            If you are looking for a fun and exciting endless runner game that will make you laugh and smile, then you should definitely download Talking Tom Gold Run 3 today. You will not regret it. You will have a blast playing this game with Talking Tom and his friends. So what are you waiting for? Download Talking Tom Gold Run 3 now and join the chase!

            -

            Frequently Asked Questions (FAQs)

            -

            Q: How do I download Talking Tom Gold Run 3?A: You can download Talking Tom Gold Run 3 from the app stores of your mobile device. The game is available for both Android and iOS devices. You can also visit the official website of Outfit7 to get more information and links to download the game.

            -

            Q: How do I play Talking Tom Gold Run 3?

            -

            A: To play Talking Tom Gold Run 3, you need to swipe your finger on the screen to control your character. You can swipe left or right to change lanes, swipe up to jump over obstacles or gaps, swipe down to slide under obstacles or bridges, and tap to use power-ups or activate special zones. You need to collect as many gold bars as possible, avoid obstacles, and catch up with Roy Rakoon.

            -

            Q: How do I unlock new characters and outfits in Talking Tom Gold Run 3?

            -

            A: To unlock new characters and outfits in Talking Tom Gold Run 3, you need to collect enough gold bars or complete certain tasks. You can also get new characters and outfits by opening vaults, completing daily missions, or watching ads. Each character and outfit has its own price and requirement that you need to meet.

            -

            Q: How do I upgrade my houses in Talking Tom Gold Run 3?

            -

            A: To upgrade your houses in Talking Tom Gold Run 3, you need to use your gold bars. You can choose which house you want to upgrade from the home screen. Each house has its own theme and design that matches the world you are in. Upgrading your houses will increase your score multiplier and make your houses look nicer and cooler.

            -

            Q: How do I compete with other players in Talking Tom Gold Run 3?

            -

            A: To compete with other players in Talking Tom Gold Run 3, you need to enter the leaderboards mode. In this mode, you can see your rank and score compared to other players around the world. You can also see your friends' rank and score if you connect your game to Facebook. You can challenge yourself, beat your high score, and climb up the leaderboards.

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md b/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md deleted file mode 100644 index dfec194defbe428553969050e5f06d007ca66540..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Lagu dari YouTube ke MP3 dalam Beberapa Detik dengan Aplikasi Ini.md +++ /dev/null @@ -1,243 +0,0 @@ - -

7 Best App Recommendations for Downloading Songs from Youtube

            -

Youtube is one of the most popular video platforms in the world, offering all kinds of content, including music. Many people like listening to songs on Youtube because of the good sound quality, the wide variety of genres, and the ease of access. However, there are times when we want to listen to Youtube songs offline, for example when there is no internet connection, or when we want to save mobile data. For that, we need a Youtube song downloader app that can convert the video format into MP3.

            -

What Is a Youtube Song Downloader App?

            -

A Youtube song downloader app is an app that can download music videos from Youtube and convert them into MP3 audio files. With this kind of app, we can save our favorite songs on our device and play them at any time without streaming online. Youtube song downloader apps are usually available for Android, Windows, Mac, or Linux.

            -

            download lagu dari youtube apk


            Download Zip ---> https://urllie.com/2uNCcN



            -

Advantages of Using a Youtube Song Downloader App

            -

There are several advantages we can get from using a Youtube song downloader app, including:

            -
              -
• We can listen to songs offline without needing an internet connection.
• -
• We can save mobile data because we do not need to stream online.
• -
• We can choose the audio quality we want, from low to high.
• -
• We can arrange song playlists according to our own taste.
• -
• We can share downloaded songs with our friends via Bluetooth, email, or social media.
            • -
            -

Things to Consider Before Downloading Songs from Youtube

            -

Before we use a Youtube song downloader app, there are a few things we need to pay attention to, including:

            -
              -
• Make sure we have permission or the copyright to download the songs we want. Do not violate the rules or other parties' property rights.
• -
• Make sure the Youtube song downloader app we choose is safe and trustworthy. Do not download apps from unclear or suspicious sources.
• -
• Make sure our device has enough storage space to hold the song files that will be downloaded.
• -
• Make sure our device has a stable and fast internet connection so downloads go smoothly.
            • -
            -

The 7 Best Youtube Song Downloader Apps

            -

Here are 7 recommendations for the best Youtube song downloader apps we can use:

            -

            VidMate

            -

VidMate is one of the best Youtube song downloader apps we can use on Android. It can download not only songs but also videos, movies, TV shows, and other content from various sites such as Facebook, Instagram, TikTok, and more. It also has a built-in music and video player that we can use to play the files we have downloaded.

            -

Features and Advantages of VidMate

            -

Here are some of VidMate's features and advantages:

            -
              -
• Supports various audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, and more.
• -
• Supports various video resolutions, from 144p up to 4K.
• -
• Supports batch downloading, i.e. downloading several files at once.
• -
• Supports background downloading, i.e. downloading files without interrupting other activities on the device.
• -
• Supports fast downloading with multithreading technology that splits a file into several parts.
• -
• Supports easy file management with category, favorites, history, and other features.
• -
• Supports automatic updates to add new features and fix bugs.
            • -
            -

            Cara Menggunakan VidMate untuk Download Lagu dari Youtube

            -

            Berikut ini adalah cara menggunakan VidMate untuk download lagu dari Youtube:

            -


            -
              -
1. Download and install the VidMate app on your Android device. You can get it from its official website or from a third-party app store.
2. Open VidMate and tap the YouTube icon on the home page.
3. Search for the song you want to download in the YouTube search box, or pick one from the recommendations.
4. Tap the download icon at the bottom of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
5. Wait for the download to finish. You can check its progress in the Downloads menu.
6. Once it is done, you can play the downloaded song from the Music menu or with another music player app.

TubeMate

TubeMate is another of the best YouTube song downloader apps for Android. Its interface looks similar to YouTube, so we can easily find and pick the songs we want to download. It also has handy features such as fast mode, background mode, playlist mode, and more.

TubeMate Features and Advantages

Here are some of TubeMate's features and advantages:

• Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
• Supports video resolutions from 144p up to 4K.
• Supports fast downloading with a fast mode that makes the most of the available internet connection.
• Supports background downloading, which lets us do other things on the device while songs are downloading.
• Supports playlist downloading, which lets us download several songs at once into one folder.
• Supports easy file management with a download list, playlists, a favorites list, and more.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use TubeMate to Download Songs from YouTube

Here is how to use TubeMate to download songs from YouTube:

1. Download and install the TubeMate app on your Android device. You can get it from its official website or from a third-party app store.
2. Open TubeMate and tap the YouTube icon on the home page.
3. Search for the song you want to download in the YouTube search box, or pick one from the recommendations.
4. Tap the download icon at the top right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
5. Wait for the download to finish. You can check its progress in the Downloads menu.
6. Once it is done, you can play the downloaded song from the Music menu or with another music player app.

InsTube

InsTube is another excellent YouTube song downloader app for Android. It can download not only songs but also videos, movies, TV shows, and other content from more than 100 sites such as Facebook, Instagram, TikTok, SoundCloud, and more. It also has advanced features such as a video locker, HD downloads, and more.

InsTube Features and Advantages

Here are some of InsTube's features and advantages:

• Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
• Supports video resolutions from 144p up to 4K.
• Supports HD downloads with clear picture and sound quality.
• Supports a video locker that protects downloaded video files with a password.
• Supports fast downloading with multithreading technology that speeds up the download process.
• Supports easy file management with categories, favorites, history, and more.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use InsTube to Download Songs from YouTube

Here is how to use InsTube to download songs from YouTube:

1. Download and install the InsTube app on your Android device. You can get it from its official website or from a third-party app store.
2. Open InsTube and tap the YouTube icon on the home page.
3. Search for the song you want to download in the YouTube search box, or pick one from the recommendations.
4. Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
5. Wait for the download to finish. You can check its progress in the Downloads menu.
6. Once it is done, you can play the downloaded song from the Music menu or with another music player app.

SnapTube

SnapTube is another top YouTube song downloader app for Android. It has a simple interface and is easy to use, and it offers unique features such as night mode, data-saving mode, VIP mode, and more.

SnapTube Features and Advantages

Here are some of SnapTube's features and advantages:

• Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
• Supports video resolutions from 144p up to 4K.
• Supports a night mode that switches the background to dark colors to save battery and protect your eyes.
• Supports a data-saving mode that reduces data usage when downloading or streaming videos.
• Supports a VIP mode that removes ads and unlocks other exclusive features.
• Supports fast downloading with multithreading technology that speeds up the download process.
• Supports easy file management with categories, favorites, history, and more.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use SnapTube to Download Songs from YouTube

Here is how to use SnapTube to download songs from YouTube:

1. Download and install the SnapTube app on your Android device. You can get it from its official website or from a third-party app store.
2. Open SnapTube and tap the YouTube icon on the home page.
3. Search for the song you want to download in the YouTube search box, or pick one from the recommendations.
4. Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
5. Wait for the download to finish. You can check its progress in the Downloads menu.
6. Once it is done, you can play the downloaded song from the Music menu or with another music player app.

Fvdtube

Fvdtube is another top YouTube song downloader app for Android. It has a simple, elegant interface and interesting features such as lyrics downloading, subtitle downloading, playlist downloading, and more.

Fvdtube Features and Advantages

Here are some of Fvdtube's features and advantages:

• Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
• Supports video resolutions from 144p up to 4K.
• Supports lyrics downloading, which displays the song lyrics while the audio file is playing.
• Supports subtitle downloading, which displays subtitles while the video file is playing.
• Supports playlist downloading, which downloads all the songs in a playlist at once.
• Supports fast downloading with multithreading technology that speeds up the download process.
• Supports easy file management with categories, favorites, history, and more.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use Fvdtube to Download Songs from YouTube

Here is how to use Fvdtube to download songs from YouTube:

1. Download and install the Fvdtube app on your Android device. You can get it from its official website or from a third-party app store.
2. Open Fvdtube and tap the YouTube icon on the home page.
3. Search for the song you want to download in the YouTube search box, or pick one from the recommendations.
4. Tap the download icon at the bottom right of the screen and choose the audio format you want. You can also choose the audio quality based on file size.
5. Wait for the download to finish. You can check its progress in the Downloads menu.
6. Once it is done, you can play the downloaded song from the Music menu or with another music player app.

Ytmp3.cc

Ytmp3.cc is one of the best YouTube song download tools for Windows, Mac, or Linux. It is actually a website that we access through a browser, so it is very easy to use and requires no installation or registration. It offers simple features such as MP3 downloads, MP4 downloads, and playlist downloads.

Ytmp3.cc Features and Advantages

Here are some of Ytmp3.cc's features and advantages:

• Supports the MP3 audio format and the MP4 video format.
• Supports video resolutions up to 1080p.
• Supports playlist downloading, which downloads all the songs in a playlist at once.
• Supports fast downloading with cloud technology that speeds up the download process.
• Supports easy file management with a download list and a playlist.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use Ytmp3.cc to Download Songs from YouTube

Here is how to use Ytmp3.cc to download songs from YouTube (a small command-line sketch of the same conversion follows after these steps):

1. Open a browser on your Windows, Mac, or Linux computer and go to the Ytmp3.cc website.
2. Find the song you want to download in the YouTube search box, or pick one from the recommendations.
3. Copy the URL of the song's video from YouTube.
4. Paste the copied video URL into the input box on the Ytmp3.cc website.
5. Choose the MP3 audio format or the MP4 video format you want to download.
6. Click the Convert button and wait for the conversion to finish.
7. Click the Download button and wait for the download to finish.
8. Once it is done, you can play the downloaded song on your device with a suitable music or video player app.
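If you are comfortable with a little code, the paste-URL-and-convert flow that Ytmp3.cc performs in the browser can also be scripted on a desktop. The sketch below is only an illustration and is not part of any app reviewed here: it assumes the open-source yt-dlp Python package and FFmpeg are installed, and the copyright cautions mentioned earlier still apply.

```python
# Illustrative sketch only: assumes `pip install yt-dlp` and FFmpeg on the PATH.
# It downloads the best available audio stream for a video URL and converts it
# to MP3, roughly what the convert-and-download steps above do in the browser.
from yt_dlp import YoutubeDL

def download_mp3(video_url: str, output_dir: str = ".") -> None:
    options = {
        "format": "bestaudio/best",                    # pick the best audio-only stream
        "outtmpl": f"{output_dir}/%(title)s.%(ext)s",  # name the file after the video title
        "postprocessors": [{
            "key": "FFmpegExtractAudio",               # convert the download with FFmpeg
            "preferredcodec": "mp3",
            "preferredquality": "192",                 # target bitrate in kbps
        }],
    }
    with YoutubeDL(options) as ydl:
        ydl.download([video_url])

if __name__ == "__main__":
    # Hypothetical example URL; replace it with a video you have the right to download.
    download_mp3("https://www.youtube.com/watch?v=VIDEO_ID")
```

The `preferredquality` value plays the same role as the audio-quality choice described for the apps above: a higher bitrate gives better sound but a larger file.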

TubePaw

TubePaw is one of the best YouTube song downloader apps for Windows, Mac, or Linux. It has a modern, elegant interface and advanced features such as 4K downloads, 360-degree downloads, VR downloads, and more.

TubePaw Features and Advantages

Here are some of TubePaw's features and advantages:

• Supports many audio and video formats, such as MP3, MP4, M4A, 3GP, FLV, WEBM, OGG, and more.
• Supports video resolutions from 144p up to 4K.
• Supports 4K downloads for videos with very high picture quality.
• Supports 360-degree downloads for videos with a wide viewing angle.
• Supports VR downloads for videos with a virtual-reality effect.
• Supports fast downloading with multithreading technology that speeds up the download process.
• Supports easy file management with categories, favorites, history, and more.
• Supports integration with social media such as Facebook, Twitter, and Instagram for sharing downloaded songs.

How to Use TubePaw to Download Songs from YouTube

Here is how to use TubePaw to download songs from YouTube:

1. Download and install the TubePaw app on your Windows, Mac, or Linux computer. You can get it from its official website or from a third-party app store.
2. Open TubePaw and type the song name or the video URL you want to download into the search box.
3. Choose the audio or video format you want to download. You can also choose the audio or video quality based on file size.
4. Click the Download button and wait for the download to finish.
5. Once it is done, you can play the downloaded song on your device with a suitable music or video player app.
            -

Conclusion

A YouTube song downloader app downloads music videos from YouTube and converts them into MP3 audio files. With such an app, we can listen to songs offline without an internet connection or online streaming. There are many good YouTube song downloader apps for Android, Windows, Mac, or Linux; examples include VidMate, TubeMate, InsTube, SnapTube, Fvdtube, Ytmp3.cc, and TubePaw. Each app has its own features and strengths, so we can choose the one that fits our needs and taste. Before using any of them, however, we should keep in mind things like copyright, safety, storage space, and the internet connection.

            -

FAQ

Here are some frequently asked questions about YouTube song downloader apps:

• Are YouTube song downloader apps legal?

YouTube song downloader apps are not entirely legal, because they infringe on content owners' copyright. If we only download songs for personal use and do not redistribute or sell them, we are unlikely to run into legal trouble, but we should still respect the content owners' copyright and avoid downloading songs protected by a license.

• Are YouTube song downloader apps safe?

They are not entirely safe, since most of them are distributed outside the official Play Store, so only download them from their official websites or other trusted sources. Most of them are free and do not require any subscription or registration, although some offer premium features that cost extra, so choose an app that fits our budget and needs.

• Can YouTube song downloader apps download songs from other sites?

These apps can download songs not only from YouTube but also from other sites such as Facebook, Instagram, TikTok, SoundCloud, and more. Not every app supports all of those sites, though, so check the list of supported sites for the app we choose before downloading from another site.

• Can YouTube song downloader apps download songs with lyrics?

They can download songs with lyrics if we choose an audio format that supports lyrics, such as M4A or OGG. Not every app has this feature, so check the features offered by the app we choose before downloading songs with lyrics.

              -
              -
              \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md b/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md deleted file mode 100644 index dc5a58d984f37d582c6bf73fe93498d20ea78b27..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Standoff 2 MOD APK v0.23.2 with Unlimited Money and Gold.md +++ /dev/null @@ -1,117 +0,0 @@ -
              -

              Standoff 2 Mod Apk New Version: Everything You Need to Know

              -

              If you are a fan of first-person shooter games on mobile devices, you have probably heard of Standoff 2. It is one of the most popular and realistic FPS games on Android and iOS platforms, with over 200 million players worldwide. It features stunning graphics, smooth gameplay, diverse weapons, competitive modes, and regular updates.

              -

              standoff 2 mod apk new version


              Download File ►►► https://urllie.com/2uNwBF



              -

              But what if you want to enhance your gaming experience even more? What if you want to unlock all the weapons, skins, stickers, and charms without spending any money? What if you want to have an unfair advantage over your opponents with aimbot and wallhack? Well, that's where a mod apk comes in.

              -

              A mod apk is a modified version of an original app that has been altered by third-party developers or hackers to add or remove certain features. By using a mod apk, you can bypass the limitations and restrictions imposed by the original app developers. You can also access premium content or functions for free.

              -

              However, using a mod apk also comes with some drawbacks and risks. You may encounter bugs, errors, or crashes that can ruin your gameplay. You may also expose your device or account to security threats such as viruses, malware, or hackers. You may also violate the terms of service or user agreement of the original app developers. You may also face legal consequences for infringing their intellectual property rights.

              -


              -

              So, how can you download and install Standoff 2 mod apk new version on your device? Here are the steps you need to follow:

              -

              How to download and install Standoff 2 mod apk new version

              -
                -
              1. Go to the APKVIPO website and search for the "Standoff 2" keyword. Click on the "Download" button above or below the article. Choose the Standoff 2 mod apk version or Standoff 2 APK to download. Once the download is complete, click on the downloaded file to install the game.
              2. -
              3. If you have not enabled the installation of apps from unknown sources, you need to do so by going to your device settings, security, and toggle on the "Unknown sources" option.
              4. -
              5. After installing the game, you need to download the OBB file from the same website. Extract the OBB file and copy it to the Android/OBB folder on your device storage.
              6. -
              7. Launch the game and enjoy the mod features.
              8. -
              -

Note: If you already have the original version of Standoff 2 on your device, you may need to uninstall it before installing the mod apk. You will also need a stable internet connection and enough free storage space on your device. The mod apk is compatible with Android 4.4 and above.
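If you prefer sideloading from a computer rather than tapping through the steps above, the same install-and-copy-OBB procedure can be driven over USB with adb (Android Debug Bridge). The sketch below is only an illustration: it assumes adb is installed and USB debugging is enabled, and the file names and package folder are placeholders rather than the game's actual values.

```python
# Rough sketch of the manual steps above, driven from a computer with adb.
# Assumes adb is on the PATH, USB debugging is enabled, and the device is connected.
# The file names and the package folder below are placeholders, not verified values.
import subprocess

APK_PATH = "standoff2_mod.apk"                          # placeholder APK file name
OBB_PATH = "main.obb"                                   # placeholder OBB file name
OBB_DIR = "/sdcard/Android/obb/com.example.standoff2"   # placeholder package folder

def run(cmd: list[str]) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)                     # stop if adb reports an error

# 1. Install the APK (the equivalent of tapping the downloaded file).
run(["adb", "install", "-r", APK_PATH])                 # -r reinstalls over an existing copy

# 2. Create the OBB folder and push the OBB file (the equivalent of copying it
#    into Android/OBB with a file manager).
run(["adb", "shell", "mkdir", "-p", OBB_DIR])
run(["adb", "push", OBB_PATH, OBB_DIR + "/"])
```

Either way the result is the same as the manual steps; the script just removes the on-device file juggling.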

              -

              What are the features of Standoff 2 mod apk new version

              -

              Standoff 2 mod apk new version offers a lot of amazing features that can make your gameplay more fun and exciting. Here are some of them:

              -
                -
              • Unlimited money and gold: You can get unlimited money and gold in the game, which you can use to buy or upgrade weapons, skins, stickers, charms, and other items. You can also use them to unlock crates, cases, or bundles that contain rare or exclusive items.
              • -
              • All weapons unlocked and upgraded: You can access all the weapons in the game, including pistols, rifles, shotguns, SMGs, snipers, knives, grenades, and more. You can also upgrade them to increase their damage, accuracy, range, fire rate, and other stats.
              • -
              • Aimbot and wallhack: You can enable aimbot and wallhack features in the game, which can help you aim better and see through walls. You can also adjust the settings of these features to suit your preference and style.
              • -
              • Custom skins and stickers: You can customize your weapons and characters with different skins and stickers that can change their appearance and style. You can also create your own skins and stickers using the in-game editor.
              • -
              • Anti-ban and anti-cheat protection: You can play the game without worrying about getting banned or detected by the anti-cheat system of Standoff 2. The mod apk has a built-in anti-ban and anti-cheat protection that can prevent any unwanted consequences.
              • -
              -

              How to play Standoff 2 mod apk new version

              -

              Standoff 2 mod apk new version is easy to play and enjoy. Here are some tips and tricks to improve your skills and performance in the game:

              -
                -
              • Choose your weapon wisely: Different weapons have different advantages and disadvantages in different situations. Choose a weapon that suits your playstyle and strategy. For example, if you prefer close-range combat, you may want to use a shotgun or a SMG. If you prefer long-range combat, you may want to use a sniper or a rifle.
              • -
              • Use cover and movement: Don't expose yourself too much to enemy fire. Use cover such as walls, boxes, cars, or barrels to protect yourself from bullets. Also, move around constantly to avoid being an easy target. Use sprinting, jumping, crouching, or sliding to dodge or surprise your enemies.
              • -
              • Communicate with your team: Standoff 2 is a team-based game that requires coordination and cooperation among teammates. Use voice chat or text chat to communicate with your team members. Share information such as enemy location, health status, weapon type, or strategy. Also, listen to your team leader or follow their commands.
              • -
              • Learn the game modes and maps: Standoff 2 has various game modes such as deathmatch, defuse, arms race, capture the flag, or custom games. Each game mode has different rules and objectives that you need to follow. Learn how each game mode works and what you need to do to win. Also, learn the maps of Standoff 2 such as sandstone, province, rust belt, zone 9, or old town. Each map has different layouts, terrains, and features that you need to familiarize yourself with. Learn the best spots, routes, and angles to attack or defend.
              • -
              • Participate in challenges and tournaments: Standoff 2 has various challenges and tournaments that you can join to test your skills and compete with other players. You can also win rewards such as money, gold, weapons, skins, or stickers. Some of the challenges and tournaments are seasonal, weekly, daily, or special events.
              • -
              -

              Conclusion

              -

              Standoff 2 mod apk new version is a great way to enjoy Standoff 2 with more features and fun. You can download and install it easily on your device and play it with unlimited money, gold, weapons, skins, stickers, aimbot, wallhack, and more. You can also improve your skills and performance with some tips and tricks.

              -

              However, you should also be aware of the risks and consequences of using a mod apk. You may face technical issues, security threats, or legal actions. You may also lose your account or get banned from the game. You should use a mod apk at your own risk and discretion.

              -

              If you are interested in trying Standoff 2 mod apk new version, you can download it from the link below. But before you do that, make sure you read the disclaimer and warning carefully.

              -

              Disclaimer and warning: This article is for educational and informational purposes only. We do not endorse or promote the use of mod apks or any other illegal or unethical activities. We are not responsible for any damage or harm caused by the use of mod apks or any other content or links provided in this article. Use them at your own risk and discretion.

              -

              FAQs

              - - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Is Standoff 2 mod apk safe to use? | Standoff 2 mod apk is not officially endorsed or supported by the developers of Standoff 2. It may contain viruses, malware, or other harmful code that can damage your device or compromise your personal data. Use it at your own risk and discretion. |
| Is Standoff 2 mod apk legal to use? | Standoff 2 mod apk may violate the terms of service and user agreement of Standoff 2. It may also infringe the intellectual property rights of the developers or other parties. Using a mod apk may result in account suspension, ban, or legal action. |
| How to update Standoff 2 mod apk? | Standoff 2 mod apk may not be compatible with the latest version of Standoff 2. To update the mod apk, you need to download and install the latest version of the mod apk file from a reliable source. You may also need to uninstall and reinstall the game to avoid any errors or glitches. |
| How to uninstall Standoff 2 mod apk? | To uninstall Standoff 2 mod apk, you need to go to your device settings, find the app manager, select Standoff 2, and tap on uninstall. You may also need to delete any residual files or folders related to the mod apk from your device storage. |
| Where can I find more information about Standoff 2 mod apk? | You can find more information about Standoff 2 mod apk from online forums, blogs, videos, or reviews. However, be careful about the sources you trust and verify the information before following any advice or instructions. |

              -
              -
              \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md b/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md deleted file mode 100644 index 39e658b5ff9ffbd6b8acf1e333d143c60ec20101..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download and Install Metal Slug Awakening APK - The Best 3D Shooter Game for Android.md +++ /dev/null @@ -1,158 +0,0 @@ -
              -

              Metal Slug: Awakening Download APK: How to Play the Remake of the Classic Arcade Game

              -

              If you are a fan of classic arcade games, then you probably know about Metal Slug. This legendary side-scrolling shooter has been entertaining gamers for decades with its fast-paced action, stunning graphics, and addictive gameplay. Now, you can enjoy a remake of this game on your mobile device with Metal Slug: Awakening.

              -

Metal Slug: Awakening is a run-and-gun title for iOS and Android published by Tencent Games and developed by its subsidiary TiMi Studios. The game was released in China on April 18, 2023, with release dates for other regions yet to be announced. It features 3D graphics, smooth animations, and various modes and features that make it a worthy successor to the original game.

              -

              metal slug awakening download apk


              Download Filehttps://urllie.com/2uNAoY



              -

              In this article, we will show you how to download Metal Slug: Awakening APK for Android, how to play the game, and some tips and tricks to help you master it. Let's get started!

              -

              How to Download Metal Slug: Awakening APK for Android

              -

              If you want to play Metal Slug: Awakening on your Android device, you will need to download the APK file from a reliable source. APK stands for Android Package Kit, and it is a file format that contains all the necessary components to install an app on your device. However, before you download the APK file, you will need to make sure that your device meets the following requirements:

              -
                -
              • Android version 5.0 or higher
              • -
              • At least 4 GB of free storage space
              • -
              • A stable internet connection
              • -
              -

              Once you have checked these requirements, you can follow these steps to download and install Metal Slug: Awakening APK:

              -
                -
1. Go to a trusted website that offers a Metal Slug: Awakening APK download link. For example, you can use JalanTikus, a popular Indonesian website that provides various apps and games for Android users.
              2. -
              3. Tap on the download button and wait for the APK file to be downloaded on your device.
              4. -
              5. Once the download is complete, locate the APK file in your device's file manager and tap on it.
              6. -
              7. You may see a warning message that says "Install unknown apps". This is because you are installing an app from a source other than Google Play Store. To proceed, tap on "Settings" and enable the option "Allow from this source".
              8. -
              9. Go back to the APK file and tap on it again. This time, you should see an installation screen. Tap on "Install" and wait for the app to be installed on your device.
              10. -
              11. Once the installation is done, you can launch the app from your app drawer or home screen.
              12. -
              -

              Congratulations! You have successfully downloaded and installed Metal Slug: Awakening APK on your Android device. Now, you can enjoy playing this amazing game anytime and anywhere.

              -

              How to Play Metal Slug: Awakening

              -

              Metal Slug: Awakening is a game that combines the classic elements of Metal Slug with some new features and modes that make it more fun and challenging. Here are some of the things you need to know about playing Metal Slug: Awakening:

              -

              Gameplay Features and Modes

              -

              Metal Slug: Awakening has various gameplay features and modes that offer different experiences and challenges. Some of them are:

              -
                -
              • Main Dungeon: This is the main mode of the game, where you have to complete missions and stages based on the original Metal Slug games. You can choose from different difficulty levels and earn rewards such as coins, gems, weapons, tanks, and characters.
              • -
              • PVP: This is the mode where you can compete with other players online in real-time battles. You can choose from different modes such as Team Deathmatch , Capture the Flag, and Ultimate Duel. You can also join the Sky Arena, where you can fight in the air with your tanks and planes.
              • -
              • PVE: This is the mode where you can team up with other players or AI allies to fight against waves of enemies and bosses. You can choose from different modes such as Survival, Boss Rush, and Raid. You can also join the World Adventure, where you can explore different regions and collect resources and rewards.
              • -
              -

              Characters and Weapons

              -

              Metal Slug: Awakening has a variety of characters and weapons that you can use to customize your gameplay style. Some of them are:

              -


              - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Character | Ability |
| --- | --- |
| Marco Rossi | Increases the damage of all weapons by 10%. |
| Tarma Roving | Increases the durability of all vehicles by 20%. |
| Eri Kasamoto | Increases the number of grenades by 2. |
| Fio Germi | Increases the ammo capacity of all weapons by 20%. |
| Nadia Cassel | Increases the critical rate of all weapons by 10%. |
| Trevor Spacey | Increases the movement speed by 10%. |
| Ralf Jones | Can use a Vulcan Punch that deals massive damage to enemies. |
| Clark Still | Can use a Super Argentine Backbreaker that throws enemies to the ground. |
| Leona Heidern | Can use a Moon Slasher that cuts through enemies with a blade of energy. |
| Corki The Forest Hunter (New Character) | Can use a Bow and Arrow that shoots arrows with different effects. |
              -

              Metal Slug: Awakening also has a wide range of weapons that you can collect and upgrade. Some of them are:

              -
                -
              • Heavy Machine Gun: A rapid-fire weapon that can mow down enemies with ease.
              • -
              • Rocket Launcher: A powerful weapon that can launch explosive rockets that deal splash damage.
              • -
              • Laser Gun: A futuristic weapon that can fire a continuous beam of energy that pierces through enemies.
              • -
              • Flame Shot: A fiery weapon that can shoot flames that burn enemies and spread to nearby targets.
              • -
              • Shotgun: A close-range weapon that can shoot pellets that spread out and deal high damage.
              • -
              • Double Machine Gun: A dual-wield weapon that can shoot two streams of bullets at once.
              • -
              • Zantetsu Sword: A melee weapon that can slash enemies with a sharp blade.
              • -
              • Thunder Shot: A shocking weapon that can shoot bolts of electricity that stun enemies and chain to nearby targets.
              • -
              • Ak-74 Machine Gun (New Weapon): A modern weapon that can shoot bullets with high accuracy and damage.
              • -
              • Double Micro Submachine Gun (New Weapon): A compact weapon that can shoot two bursts of bullets at once.
              • -
              • Sniper Rifle (New Weapon): A long-range weapon that can shoot bullets with high precision and damage.
              • -
              • Rocket Propelled Grenade (New Weapon): A heavy weapon that can launch grenades that explode on impact and deal splash damage.
              • -
              • Ice Gun (New Weapon): A cool weapon that can shoot ice crystals that freeze enemies and slow them down.
              • -
              -

              Tips and Tricks

              -

              Metal Slug: Awakening is a game that requires skill, strategy, and reflexes to master. Here are some tips and tricks to help you improve your performance and enjoy the game more:

              -
                -
              • Aim for headshots: Shooting enemies in the head will deal more damage and sometimes cause them to drop items. Try to aim for headshots whenever possible to save ammo and finish enemies faster.
              • -
              • Dodge enemy attacks: Enemies will shoot, throw, or charge at you with various attacks. You can dodge them by jumping, sliding, or moving left or right. You can also use vehicles or obstacles to shield yourself from enemy fire. Dodging enemy attacks will help you avoid taking damage and losing lives.
              • -
              • Use vehicles and animals: Vehicles and animals are special items that you can find or summon in the game. They can help you move faster, deal more damage, and survive longer. For example, you can use a tank to blast enemies with a cannon, a camel to shoot fireballs, or a monkey to throw bananas. However, be careful as vehicles and animals can also be damaged or destroyed by enemy attacks.
              • -
              • Collect items and power-ups: Items and power-ups are scattered throughout the stages or dropped by enemies. They can help you replenish your health, ammo, grenades, or lives. They can also give you temporary boosts such as increased speed, damage, or invincibility. Try to collect as many items and power-ups as you can to enhance your gameplay.
              • -
              • Upgrade your characters and weapons: You can use coins and gems to upgrade your characters and weapons in the game. Upgrading your characters will increase their stats and abilities, while upgrading your weapons will increase their damage and ammo capacity. You can also unlock new characters and weapons by completing missions or stages. Upgrading your characters and weapons will make them more effective and powerful in the game.
              • -
              -

              Conclusion

              -

              Metal Slug: Awakening is a game that brings back the nostalgia of the classic arcade game with a modern twist. It has 3D graphics, smooth animations, and various modes and features that make it a fun and exciting game to play. You can download Metal Slug: Awakening APK for Android from a reliable source and install it on your device easily. You can also play the game with different characters and weapons, and use various tips and tricks to improve your performance and enjoy the game more.

              -

              If you are looking for a game that combines action, adventure, and humor, then Metal Slug: Awakening is the game for you. Download it now and join the battle against the evil forces of General Morden!

              -

              FAQs

              -

              Here are some of the frequently asked questions about Metal Slug: Awakening:

              -
                -
              • Q: Is Metal Slug: Awakening free to play?
              • -
              • A: Yes, Metal Slug: Awakening is free to play, but it also offers in-app purchases that can enhance your gameplay experience.
              • -
              • Q: Is Metal Slug: Awakening available for iOS devices?
              • -
              • A: Yes, Metal Slug: Awakening is available for iOS devices as well as Android devices.
              • -
              • Q: How can I play Metal Slug: Awakening with my friends?
              • -
              • A: You can play Metal Slug: Awakening with your friends by joining the PVP or PVE modes online. You can also invite your friends to join your team or challenge them to a duel.
              • -
              • Q: How can I get more coins and gems in Metal Slug: Awakening?
              • -
              • A: You can get more coins and gems in Metal Slug: Awakening by completing missions and stages, winning battles, collecting items, or buying them with real money.
              • -
              • Q: How can I contact the developers of Metal Slug: Awakening?
              • -
              • A: You can contact the developers of Metal Slug: Awakening by visiting their official website [here] or following their social media accounts [here] and [here].
              • -

              -
              -
              \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md b/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md deleted file mode 100644 index 2adbc4e75bc36b0d9de15bd9b3ab3ca4e93ac1df..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Drive Your Dream Car in the Real World with OculAR APK.md +++ /dev/null @@ -1,105 +0,0 @@ - -

              OculAR - Drive AR Cars APK: A Review

              -

              Have you ever dreamed of driving your favorite car in the real world, but without spending a fortune or breaking any laws? If so, you might want to check out OculAR - Drive AR Cars APK, a simulation game that lets you drive realistic cars in augmented reality (AR) using your Android device. In this article, we will review OculAR and tell you why you should give it a try.

              -

              Features: What can you do with OculAR

              -

              OculAR is one of the most realistic AR apps available on the Google Play Store for ARCore supported Android devices. It uses modern AR techniques to create immersive and interactive experiences that blend virtual and real worlds. With OculAR, you can:

              -

              ocular-drive ar cars apk


              DOWNLOAD ☆☆☆ https://urllie.com/2uNC7k



              -
                -
              • Drive ultra realistic cars with realistic vehicle physics.
              • -
              • Perform stunts, do ramp jumps, and drift like a pro.
              • -
              • Place ramps, tires, and other objects to create your own tracks and scenarios.
              • -
              • Click pictures and share them with your friends on social media.
              • -
              • Choose from 12+ cars, including sports cars, muscle cars, trucks, and more.
              • -
              -

              The visuals of OculAR are so stunning that they will sometimes make you believe that it's a real car. You can adjust the size, position, and orientation of the car using simple gestures. You can also switch between different camera modes, such as first-person, third-person, or free camera.

              -

              How to download and install OculAR

              -

              Downloading and installing OculAR is very easy. All you need is an ARCore compatible device and an internet connection. Here are the steps to follow:

              -
                -
              1. Go to the Google Play Store and search for OculAR - Drive AR Cars APK or click on this link.
              2. -
              3. Tap on Install and wait for the app to download.
              4. -
              5. Once the app is installed, open it and grant the necessary permissions for camera and storage.
              6. -
              7. Follow the instructions on the screen to scan your surroundings and place a car.
              8. -
              9. Enjoy driving your dream car in AR!
              10. -
              -

              Pros and cons of OculAR

              -

              OculAR is a fun and innovative app that offers a lot of entertainment and excitement for car enthusiasts. However, like any app, it also has some drawbacks. Here are some of the pros and cons of OculAR:

              - - - - - - -
| Pros | Cons |
| --- | --- |
| High-quality graphics and sound effects. | Requires a lot of space and battery power. |
| Easy to use and customize. | May not work well in low-light or cluttered environments. |
| Supports both indoor and outdoor modes. | Limited number of cars and objects. |
| Free to download and play. | Contains ads and in-app purchases. |
              -

              Conclusion

              -

OculAR - Drive AR Cars APK is a simulation game that lets you drive realistic cars in augmented reality using your Android device. It has many features that make it fun and engaging, such as realistic physics, stunts, ramps, photo capture, and more.

              If you are a fan of cars and AR, you should definitely give OculAR a try. It is one of the best AR apps for Android that will make you feel like you are driving a real car in your own environment. You can have fun with your friends, show off your skills, and create amazing memories with OculAR.

              -

              So, what are you waiting for? Download OculAR - Drive AR Cars APK today and enjoy the ultimate AR driving experience. And don't forget to share your feedback and suggestions with the developers. They are always working hard to improve the app and add more features and content.

              -


              -

              FAQs

              -

              Here are some of the frequently asked questions about OculAR:

              -

              Q1: What are the requirements for running OculAR?

              -

              A1: To run OculAR, you need an Android device that supports ARCore, which is Google's platform for building AR experiences. You can check the list of ARCore supported devices here. You also need a stable internet connection and enough storage space on your device.

              -

              Q2: How realistic are the cars in OculAR?

              -

              A2: The cars in OculAR are very realistic and detailed. They are modeled after real-life cars and have accurate proportions, colors, textures, and sounds. You can also see the interior of the cars and interact with the steering wheel, pedals, and dashboard.

              -

              Q3: How can I take pictures and share them with my friends?

              -

              A3: Taking pictures and sharing them with your friends is very easy in OculAR. You just need to tap on the camera icon on the top right corner of the screen and choose whether you want to take a screenshot or a video. Then, you can edit your picture or video using filters, stickers, text, and more. Finally, you can share your picture or video with your friends on social media platforms such as Facebook, Instagram, WhatsApp, etc.

              -

              Q4: What are some of the stunts and ramps that I can use in OculAR?

              -

              A4: OculAR has many stunts and ramps that you can use to make your driving more fun and exciting. You can find them in the objects menu on the bottom left corner of the screen. Some of the stunts and ramps that you can use are:

              -
                -
              • Loop: A circular ramp that lets you do a 360-degree loop.
              • Quarter Pipe: A curved ramp that lets you do a vertical jump.
              • Half Pipe: A U-shaped ramp that lets you do a backflip or a frontflip.
              • Ramp: A straight ramp that lets you do a long jump.
              • Bridge: A bridge that lets you cross over a gap or an obstacle.
              -

              Q5: How can I get more cars and objects in OculAR?

              -

              A5: To get more cars and objects in OculAR, you need to earn coins by driving, performing stunts, taking pictures, and watching ads. You can also buy coins using real money through in-app purchases. Then, you can use your coins to unlock new cars and objects from the shop menu on the top left corner of the screen.

              -
              -
              \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md deleted file mode 100644 index 6c74ed7175adf27d334c619575f498f9f8a1ef88..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Call of Duty Mobile APK Terbaru for Android - Enjoy the Best FPS Experience on Your Phone.md +++ /dev/null @@ -1,84 +0,0 @@ - -

              How to Download Call of Duty Mobile APK Terbaru

              -

              If you are a fan of first-person shooter games, you have probably heard of Call of Duty Mobile, one of the most popular and successful mobile games in the world. Call of Duty Mobile is a free-to-play game that brings the thrill and excitement of the Call of Duty franchise to your smartphone. You can play as iconic characters from the series, such as Captain Price, Ghost, Soap, and more, and compete in various multiplayer modes and battle royale on classic maps like Nuketown, Crash, and Hijacked.

              -

              download call of duty mobile apk terbaru


              DOWNLOADhttps://gohhs.com/2uPttp



              -

              But did you know that there is a way to enjoy the latest features and updates of Call of Duty Mobile without waiting for the official release on the Google Play Store? Yes, you can download the Call of Duty Mobile APK Terbaru ("terbaru" is Indonesian for "latest"), which is the newest version of the game packaged for direct installation on Android devices. In this article, we will show you how to download and install Call of Duty Mobile APK Terbaru, as well as some tips and tricks to improve your gameplay. Let's get started!

              -

              Features of Call of Duty Mobile APK Terbaru

              -

              Call of Duty Mobile APK Terbaru is not just a regular update, but a major overhaul that adds new features, modes, maps, characters, weapons, and more to the game. Here are some of the highlights:

              -

              Multiplayer Modes

              -

              Call of Duty Mobile APK Terbaru offers a variety of multiplayer modes that cater to different play styles and preferences. You can choose from Team Deathmatch, Domination, Kill-Confirmed, Search and Destroy, Hardpoint, Free-for-All, Gun Game, Capture the Flag, and more. You can also customize your match settings, such as time limit, score limit, number of players, etc. Whether you want a fast-paced action or a strategic challenge, you will find a mode that suits you.

              -

              -

              Battle Royale Mode

              -

              If you are looking for a more immersive and intense experience, you can try the Battle Royale mode in Call of Duty Mobile APK Terbaru. This mode pits you against 99 other players in a massive map that shrinks over time. You can choose to play solo, duo, or squad mode, and select your class from Medic, Scout, Ninja, Clown, Defender, Mechanic, Airborne, or Hacker. You can also find vehicles, weapons, perks, loot boxes, air drops, and other items to help you survive. The last one standing wins!

              -

              Characters and Weapons

              -

              One of the best things about Call of Duty Mobile APK Terbaru is that you can play as your favorite characters from the Call of Duty universe, such as Captain Price, Ghost, Alex Mason, Frank Woods, John "Soap" MacTavish, and more. You can also unlock new skins, outfits, and accessories for your characters by completing missions, challenges, and events, and customize your weapons with different attachments, camos, stickers, and charms. You can choose from a wide range of weapons, such as assault rifles, sniper rifles, shotguns, SMGs, LMGs, pistols, launchers, melee weapons, and more.

              -

              How to Download and Install Call of Duty Mobile APK Terbaru

              -

              Now that you know the features of Call of Duty Mobile APK Terbaru, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

              -

              Step 1: Download the APK file from a trusted source

              -

              The first thing you need to do is to download the APK file of Call of Duty Mobile APK Terbaru from a reliable and secure source. You can use this link to download the file directly to your device. The file size is about 1.8 GB, so make sure you have enough storage space and a stable internet connection.

              -

              Step 2: Enable unknown sources on your device settings

              -

              The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > toggle on. You might see a warning message, but don't worry, it's safe to proceed.

              -

              Step 3: Install the APK file and launch the game

              -

              The final step is to install the APK file and launch the game. To do this, locate the downloaded file on your device and tap on it. You will see a prompt asking you to install the app. Tap on install and wait for the process to finish. Once done, you can open the app and enjoy playing Call of Duty Mobile APK Terbaru!

              -

              Tips and Tricks for Playing Call of Duty Mobile APK Terbaru

              -

              Now that you have successfully downloaded and installed Call of Duty Mobile APK Terbaru, you might want some tips and tricks to improve your gameplay and win more matches. Here are some of them:

              -

              Adjust your sensitivity and controls for optimal performance

              -

              One of the most important things to do before playing Call of Duty Mobile APK Terbaru is to adjust your sensitivity and controls for optimal performance. You can do this by going to the settings > controls > custom layout. Here you can change the size, position, and opacity of your buttons, as well as the sensitivity of your aim, movement, and firing. You can also choose between simple mode (auto-fire) or advanced mode (manual fire) depending on your preference.

              -

              Use headphones and voice chat to communicate with your teammates

              -

              Another tip for playing Call of Duty Mobile APK Terbaru is to use headphones and voice chat to communicate with your teammates. This will help you coordinate your strategies, call out enemy locations, request backup, and more. You can also use the quick chat feature to send pre-set messages or emojis to your team or opponents. To use voice chat or quick chat, just tap on the microphone or chat icon on the top left corner of your screen.

              -

              Learn the maps and the best spots to hide, snipe, and ambush

              -

              The last tip for playing Call of Duty Mobile APK Terbaru is to learn the maps and the best spots to hide, snipe, and ambush. Each map has its own layout, terrain, buildings, vehicles, and other features that can affect your gameplay. You should familiarize yourself with each map and find out where are the best places to take cover, snipe enemies from afar, or ambush them from behind. You can also use vehicles such as helicopters or tanks to move around faster or deal more damage.

              -

              Conclusion

              -

              In conclusion, Call of Duty Mobile APK Terbaru is a great way to enjoy the latest features and updates of one of the most popular mobile games in the world. You can download and install it easily by following our guide above. You can also improve your gameplay by following our tips and tricks above. We hope you have fun playing Call of Duty Mobile APK Terbaru!

              -

              FAQs

              -

              Q1: Is Call of Duty Mobile APK Terbaru safe to download?

              -

              A1: Yes, Call of Duty Mobile APK Terbaru is safe to download as long as you use a trusted and secure source. We recommend using this link to download the file directly to your device. However, you should always be careful when downloading and installing apps from unknown sources, as they might contain malware or viruses that can harm your device or data.

              -

              Q2: How much space does Call of Duty Mobile APK Terbaru require?

              -

              A2: Call of Duty Mobile APK Terbaru requires about 1.8 GB of storage space on your device. You should also have enough free space for additional data and updates that the game might need. You can check your available storage space by going to your device settings > storage.

              -

              Q3: Can I play Call of Duty Mobile APK Terbaru on PC?

              -

              A3: Yes, you can play Call of Duty Mobile APK Terbaru on PC by using an Android emulator. An emulator is a software that allows you to run Android apps on your PC. There are many emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can download and install any of them on your PC, and then download and install Call of Duty Mobile APK Terbaru on the emulator. However, you should note that playing Call of Duty Mobile APK Terbaru on PC might give you an unfair advantage over other players who are playing on mobile devices, as you can use a keyboard and mouse instead of a touchscreen.

              -

              Q4: How can I update Call of Duty Mobile APK Terbaru?

              -

              A4: To update Call of Duty Mobile APK Terbaru, you need to download and install the latest version of the APK file from the same source that you used before. You can use this link to download the file directly to your device. You don't need to uninstall the previous version, as the new version will overwrite it. However, you should always backup your game data before updating, in case something goes wrong.

              -

              Q5: Where can I find more information about Call of Duty Mobile APK Terbaru?

              -

              A5: You can find more information about Call of Duty Mobile APK Terbaru by visiting the official website of the game, or by following its social media accounts on Facebook, Twitter, Instagram, YouTube, etc. You can also join the official Discord server or Reddit community of the game, where you can interact with other players, get news and updates, share feedback and suggestions, report bugs and issues, and more.

              -
              -
              \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md deleted file mode 100644 index 3dab59fe96083a1b12ef2e9eea10f1d267d1a989..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download That File in Minutes with These Simple Steps.md +++ /dev/null @@ -1,121 +0,0 @@ -
              -

              How to Download Files from the Internet

              -

              Downloading files from the internet is a common and useful task that you may need to do for various purposes, such as getting music, videos, documents, software, or images. However, downloading files can also be challenging, especially if you have to deal with large, multiple, or broken downloads. That's why you need a download manager to help you manage your downloads and make them faster, easier, and more reliable.

              -

              how do you download that


              Download Zip ►►►►► https://gohhs.com/2uPsA6



              -

              What is a Download Manager and Why You Need One

              -

              A download manager is a software tool that can monitor and intercept downloads from web browsers, but can also work independently. A download manager can offer many benefits over using your browser's built-in download function, such as:

              -

              Benefits of Using a Download Manager

              -
                -
              • It can speed up your downloads by using multiple connections and splitting files into smaller parts.
              • It can resume your downloads if they are interrupted by network problems, power outages, or computer crashes (a short sketch of how resuming works follows this list).
              • It can organize your downloads by categories, folders, or priorities.
              • It can preview your downloads before they are completed, such as playing audio or video files or viewing images.
              • It can convert your downloads to different formats, such as MP3, MP4, AVI, or ZIP.
              • It can scan your downloads for viruses or malware and ensure their safety.
              -
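              To make the resume point above concrete, here is a minimal Python sketch of how resuming generally works under the hood. It is not code from any particular download manager: the URL and filename are placeholders, the third-party requests library is assumed to be installed, and the server must honor HTTP Range requests for the resume to take effect.

```python
import os
import requests  # third-party library, assumed installed: pip install requests

def resume_download(url, dest_path, chunk_size=256 * 1024):
    """Continue a partial download from where it stopped, if the server honors Range requests."""
    already = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    headers = {"Range": f"bytes={already}-"} if already else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 Partial Content means the server accepted the Range header;
        # a plain 200 means it ignored it, so the file must be rewritten from scratch.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(dest_path, mode) as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)

# Placeholder URL and filename for illustration only
resume_download("https://example.com/big-file.zip", "big-file.zip")
```

              A real download manager layers retries, scheduling, and several parallel ranges on top of this same mechanism.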

              Types of Files You Can Download

              -

              There are many types of files you can download from the internet, depending on your needs and preferences. Some of the most common ones are:

              -
                -
              • Music files, such as MP3, WAV, or FLAC.
              • Video files, such as MP4, AVI, or MKV.
              • Document files, such as PDF, DOCX, or TXT.
              • Software files, such as EXE, MSI, or APK.
              • Image files, such as JPG, PNG, or GIF.
              -

              How to Choose the Best Download Manager for Your Needs

              -

              There are many download managers available for Windows users, but not all of them are created equal. Some have more features, some are easier to use, and some integrate better with your browser than others. To choose the best download manager for your needs, you should consider the following factors:

              -

              Features to Look for in a Download Manager

              -
                -
              • The speed and reliability of the downloads. You want a download manager that can make your downloads faster and more stable by using multiple connections and resuming broken downloads (a sketch of the multiple-connections idea follows this list).
              • The interface and usability of the download manager. You want a download manager that has a simple and intuitive interface that lets you easily access and control your downloads.
              • The compatibility and integration with your browser. You want a download manager that can work well with your preferred browser and can automatically capture the download links from web pages.
              • The security and privacy of the download manager. You want a download manager that can protect your downloads from viruses or malware and can encrypt your data if needed.
              • The customization and flexibility of the download manager. You want a download manager that can be customized to suit your preferences and needs, such as changing the settings, themes, or languages.
              -
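              The "multiple connections" feature mentioned in the first point can be pictured with the rough Python sketch below. It is a simplified, assumption-heavy toy rather than how any specific program works: the URL is a placeholder, the server must report a Content-Length and accept Range requests, and the downloaded pieces are held in memory, which real download managers avoid for large files.

```python
import requests  # third-party library, assumed installed
from concurrent.futures import ThreadPoolExecutor

def download_in_parts(url, dest_path, parts=4):
    """Fetch a file as several byte ranges at once, then stitch the pieces together in order."""
    size = int(requests.head(url, allow_redirects=True, timeout=30).headers["Content-Length"])
    step = size // parts

    def fetch(i):
        start = i * step
        end = size - 1 if i == parts - 1 else start + step - 1
        resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=120)
        resp.raise_for_status()
        return resp.content

    with ThreadPoolExecutor(max_workers=parts) as pool:
        pieces = list(pool.map(fetch, range(parts)))  # map returns results in submission order

    with open(dest_path, "wb") as f:
        for piece in pieces:
            f.write(piece)

# Placeholder URL and filename for illustration only
download_in_parts("https://example.com/video.mp4", "video.mp4", parts=4)
```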

              Top Free Download Managers for Windows

              -

              To help you choose the best download manager for your needs, we have compiled a list of some of the top free download managers for Windows that offer most of the features mentioned above. Here they are:

              - -
              Free Download Manager

              Free Download Manager is a powerful and easy-to-use download manager that can handle all kinds of downloads, from torrents to videos. It can accelerate your downloads by up to 10 times, resume broken downloads, and schedule your downloads for later. It also has a built-in media converter, a video downloader, and a browser extension that integrates with Chrome, Firefox, Edge, and Opera. Free Download Manager is free and open-source, and supports Windows, Mac, and Linux.

              -

              -

              Ninja Download Manager

              -

              Ninja Download Manager is a sleek and modern download manager that can boost your download speed by using multiple connections and splitting files into chunks. It can also resume your downloads from where they left off, even if the server does not support it. It has a user-friendly interface that lets you manage your downloads with drag-and-drop, pause and resume, and preview features. It also has a video downloader that can capture videos from popular sites like YouTube and Vimeo. Ninja Download Manager is free for personal use, and supports Windows and Mac.

              -

              JDownloader

              -

              JDownloader is a download manager that specializes in downloading files from one-click hosting sites like Rapidshare, Megaupload, Mediafire, and others. It can bypass the captchas and wait times that these sites impose, and can download multiple files at once with premium accounts. It also has a link grabber that can scan web pages for download links, a clipboard monitor that can detect copied links, and a remote control feature that lets you manage your downloads from another device. JDownloader is free and open-source, and supports Windows, Mac, Linux, and other platforms.

              How to Use a Download Manager to Download Files

              -

              Now that you have chosen the best download manager for your needs, you can start using it to download files from the internet. The exact steps may vary depending on the download manager you use, but the general process is similar. Here are the steps to download files with a download manager:

              -

              Steps to Download Files with a Download Manager

              -
                -
              1. Install and launch the download manager on your computer.
              2. Copy the URL of the file you want to download from your browser or any other source.
              3. Paste the URL into the download manager's input box or use the browser extension to capture the link automatically.
              4. Choose the destination folder, file name, and other settings for your download.
              5. Click on the start or download button to begin your download.
              6. Monitor the progress and status of your download in the download manager's interface.
              7. Once the download is completed, you can open, play, or view the file from the download manager or from the destination folder (a bare-bones scripted version of these steps is sketched after this list).
              -
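              When you just need one file and basic progress feedback, the same steps can be scripted in a few lines. The sketch below uses only Python's standard library; the URL and filename are placeholders, and it has none of the retry or resume logic a real download manager adds.

```python
import sys
import urllib.request

def show_progress(blocks_done, block_size, total_size):
    """Report hook called by urlretrieve as data arrives; prints a rough percentage."""
    if total_size > 0:
        pct = min(100, blocks_done * block_size * 100 // total_size)
        sys.stdout.write(f"\rDownloading... {pct}%")
        sys.stdout.flush()

# Placeholder URL and destination for illustration only
url = "https://example.com/document.pdf"
urllib.request.urlretrieve(url, "document.pdf", reporthook=show_progress)
print("\nDone - open document.pdf from the destination folder.")
```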

              Tips and Tricks to Optimize Your Downloads

              -

              To make the most of your download manager and your downloads, you can follow these tips and tricks:

              -
                -
              • Adjust the number of connections and the bandwidth limit according to your network speed and availability.
              • Use a VPN or a proxy server to bypass geo-restrictions or censorship that may prevent you from downloading certain files.
              • Use a checksum or a hash function to verify the integrity and authenticity of your downloads and avoid corrupted or tampered files (a short example follows this list).
              • Use a scheduler or a timer to start or stop your downloads at specific times or intervals.
              • Use filters or rules to sort your downloads by categories, types, sizes, dates, or other criteria.
              -
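              For the checksum tip above, here is one common way to do the verification in Python with the standard hashlib module. The expected value and filename below are placeholders; the real checksum must come from the site you downloaded the file from.

```python
import hashlib

def sha256_of(path, block_size=1024 * 1024):
    """Compute the SHA-256 checksum of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Placeholder values for illustration only
expected = "paste-the-checksum-published-by-the-download-site-here"
actual = sha256_of("installer.exe")
print("Checksum OK" if actual == expected else f"Checksum mismatch: {actual}")
```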

              Conclusion

              -

              Downloading files from the internet is a common and useful task that can be made easier and faster with a download manager. A download manager can offer many benefits over using your browser's built-in download function, such as speeding up your downloads, resuming broken downloads, organizing your downloads, previewing your downloads, converting your downloads, and scanning your downloads. There are many types of files you can download from the internet, such as music, video, document, software, or image files. To choose the best download manager for your needs, you should consider the features, interface, compatibility, security, and customization of the download manager. Some of the top free download managers for Windows are Free Download Manager, Ninja Download Manager, and JDownloader. To use a download manager to download files, you need to copy and paste the URL of the file into the download manager or use the browser extension to capture it automatically. Then you need to choose the destination folder, file name, and other settings for your download. Finally, you need to start your download and monitor its progress and status. To optimize your downloads, you can adjust the number of connections and the bandwidth limit, use a VPN or a proxy server, use a checksum or a hash function, use a scheduler or a timer, and use filters or rules.

              -

              FAQs

              -

              Here are some frequently asked questions about downloading files from the internet:

              -
                -
              1. What is the difference between downloading and streaming?
              Downloading is when you save a file from the internet to your computer or device for later use. Streaming is when you play a file from the internet without saving it to your computer or device. Downloading usually requires more storage space and bandwidth than streaming, but it also allows you to access the file offline and without interruptions.
              2. How can I resume a failed or interrupted download?
              If your download fails or is interrupted due to network problems, power outages, or computer crashes, you can try to resume it with your download manager. Most download managers have a resume feature that can continue your download from where it left off. However, this may not work if the server does not support resuming downloads or if the file has been removed or changed.
              3. How can I speed up my downloads?
              There are several ways to speed up your downloads, such as using a faster internet connection, closing other applications that use bandwidth, clearing your browser cache and cookies, using multiple connections and splitting files into smaller parts with your download manager, using a VPN or a proxy server to bypass throttling or congestion, and downloading files from reliable and fast servers.
              4. How can I protect my downloads from viruses or malware?
              There are several ways to protect your downloads from viruses or malware, such as using reputable, up-to-date antivirus software on your computer or device, using a download manager that can scan your downloads for viruses or malware before or after downloading them, and avoiding downloading files from suspicious or unknown sources or links.
              5. How can I convert my downloads to different formats?
              There are several ways to convert your downloads to different formats, such as using a download manager that has a built-in media converter that can change the format of your audio, video, or image files, using an online converter that can upload and convert your files to various formats, or using a standalone converter that can install and run on your computer or device and convert your files offline.

              -
              -
              \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md deleted file mode 100644 index fa5d39b6213f8a5e142b643575f99d9149cc71c6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/license.md +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2020 Vercel, Inc. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/spaces/fffiloni/lama-video-watermark-remover/README.md b/spaces/fffiloni/lama-video-watermark-remover/README.md deleted file mode 100644 index 5cb779ba2b31cbb0bdd23a1fb74716caa3b0fae9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/README.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: LaMa Video Watermark Remover -emoji: 🌖 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.0.24 -python_version: 3.7.13 -app_file: app.py -pinned: false -duplicated_from: fffiloni/lama ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`python_version`: string -Any valid Python 3.x or 3.x.x version. -Defaults to 3.8.9. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/fffiloni/video2openpose2/app.py b/spaces/fffiloni/video2openpose2/app.py deleted file mode 100644 index 73a129fe57a43ff8dd20cd2325e82b1df5f2d6ae..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/video2openpose2/app.py +++ /dev/null @@ -1,135 +0,0 @@ -import gradio as gr -from controlnet_aux import OpenposeDetector -import os -import cv2 -import numpy as np -from PIL import Image -from moviepy.editor import * - -openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet') - -def get_frames(video_in): - frames = [] - #resize the video - clip = VideoFileClip(video_in) - - #check fps - if clip.fps > 30: - print("vide rate is over 30, resetting to 30") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=30) - else: - print("video rate is OK") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=clip.fps) - - print("video resized to 512 height") - - # Opens the Video file with CV2 - cap= cv2.VideoCapture("video_resized.mp4") - - fps = cap.get(cv2.CAP_PROP_FPS) - print("video fps: " + str(fps)) - i=0 - while(cap.isOpened()): - ret, frame = cap.read() - if ret == False: - break - cv2.imwrite('kang'+str(i)+'.jpg',frame) - frames.append('kang'+str(i)+'.jpg') - i+=1 - - cap.release() - cv2.destroyAllWindows() - print("broke the video into frames") - - return frames, fps - -def get_openpose_filter(i): - image = Image.open(i) - - #image = np.array(image) - - image = openpose(image) - #image = Image.fromarray(image) - image.save("openpose_frame_" + str(i) + ".jpeg") - return "openpose_frame_" + str(i) + ".jpeg" - -def create_video(frames, fps, type): - print("building video result") - clip = ImageSequenceClip(frames, fps=fps) - clip.write_videofile(type + "_result.mp4", fps=fps) - - return type + "_result.mp4" - -def convertG2V(imported_gif): - clip = VideoFileClip(imported_gif.name) - clip.write_videofile("my_gif_video.mp4") - return "my_gif_video.mp4" - -def infer(video_in): - - - # 1. break video into frames and get FPS - break_vid = get_frames(video_in) - frames_list= break_vid[0] - fps = break_vid[1] - #n_frame = int(trim_value*fps) - n_frame = len(frames_list) - - if n_frame >= len(frames_list): - print("video is shorter than the cut value") - n_frame = len(frames_list) - - # 2. prepare frames result arrays - result_frames = [] - print("set stop frames to: " + str(n_frame)) - - for i in frames_list[0:int(n_frame)]: - openpose_frame = get_openpose_filter(i) - result_frames.append(openpose_frame) - print("frame " + i + "/" + str(n_frame) + ": done;") - - - final_vid = create_video(result_frames, fps, "openpose") - - files = [final_vid] - - return final_vid, files - -title=""" -
              -
              -

              - Video to OpenPose -

              -
              - -
              -""" - -with gr.Blocks() as demo: - with gr.Column(): - gr.HTML(title) - with gr.Row(): - with gr.Column(): - video_input = gr.Video(source="upload", type="filepath") - gif_input = gr.File(label="import a GIF instead", file_types=['.gif']) - gif_input.change(fn=convertG2V, inputs=gif_input, outputs=video_input) - submit_btn = gr.Button("Submit") - - with gr.Column(): - video_output = gr.Video() - file_output = gr.Files() - - submit_btn.click(fn=infer, inputs=[video_input], outputs=[video_output, file_output]) - -demo.launch() \ No newline at end of file diff --git a/spaces/firefighter/PdfSumGPT/utils/chatgpt.py b/spaces/firefighter/PdfSumGPT/utils/chatgpt.py deleted file mode 100644 index af350471885e97b7e18652cc624b06e3e6b8eb06..0000000000000000000000000000000000000000 --- a/spaces/firefighter/PdfSumGPT/utils/chatgpt.py +++ /dev/null @@ -1,43 +0,0 @@ -import random - -import openai - - -class ChatGPTAPI: - def __init__(self, api_key: str = '', max_input_length: int = 1024): - openai.api_key = self.load_api_key(api_key) - self.max_input_length = max_input_length - - @staticmethod - def load_api_key(api_key: str): - if not api_key: - try: - api_key = open('data/api_key.txt', 'r').read() - except Exception as e: - raise Exception(f'ChatGPT Error: No API key provided {e}') - - if '\n' in api_key: - api_key_list = api_key.strip().split('\n') - api_key = random.choice(api_key_list) - return api_key - - def __call__(self, content: str): - assert isinstance(content, str), 'ChatGPT Error: content must be a string' - content = content.strip() - messages = [{'role': 'user', 'content': content}] - try: - resp = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages - ) - output: str = resp['choices'][0]['message']['content'] - output = output.strip() - except Exception as e: - raise Exception(f'ChatGPT Error: {e}') - return output - - -if __name__ == '__main__': - chatgpt = ChatGPTAPI() - response = chatgpt('Hello, how are you?') - print(response) diff --git a/spaces/flax-community/code-clippy-problem-solver/app.py b/spaces/flax-community/code-clippy-problem-solver/app.py deleted file mode 100644 index 085638973f2086140019657ad2f27409796bc6a1..0000000000000000000000000000000000000000 --- a/spaces/flax-community/code-clippy-problem-solver/app.py +++ /dev/null @@ -1,160 +0,0 @@ -import urllib - -import streamlit as st -from transformers import AutoModelForCausalLM, AutoTokenizer - -# model_name = "flax-community/gpt-neo-1.3B-apps-all" -model_name = "flax-community/gpt-neo-125M-apps-all" - - -@st.cache(allow_output_mutation=True, max_entries=1) -def get_model(): - model = AutoModelForCausalLM.from_pretrained(model_name) - tokenizer = AutoTokenizer.from_pretrained(model_name) - tokenizer.pad_token = tokenizer.eos_token - return (model, tokenizer) - - -def format_input(question, starter_code=""): - answer_type = ( - "\nUse Call-Based format\n" if starter_code else "\nUse Standard Input format\n" - ) - return f"\nQUESTION:\n{question}\n{starter_code}\n{answer_type}\nANSWER:\n" - - -def clean_text(generation): - # clean up text has discussed in OpenAI's paper "Evaluating Large Language Models Trained on Code" - generation = generation.split("\ndef")[0] - generation = generation.split("\nclass")[0] - generation = generation.split("\n#")[0] - generation = generation.split("\nif")[0] - - return generation - - -def generate_solution( - model, tokenizer, question, starter_code="", temperature=1.0, num_beams=1 -): - prompt = format_input(question, starter_code) - input_ids = 
tokenizer(prompt, return_tensors="pt").input_ids - start = len(input_ids[0]) - - output = model.generate( - input_ids, - max_length=start + 150, - do_sample=True, - top_p=0.95, - pad_token_id=tokenizer.pad_token_id, - eos_token_id=tokenizer.eos_token_id, - early_stopping=True, - temperature=temperature, - num_beams=int(num_beams), - no_repeat_ngram_size=None, - repetition_penalty=None, - num_return_sequences=None, - ) - output_str = tokenizer.decode(output[0][start:], skip_special_tokens=True).strip() - output_str = clean_text(output_str) - - return output_str - - -_EXAMPLES = [ - [ - """ -Given a 2D list of size `m * n`. Your task is to find the sum of minimum value in each row. -For Example: -```python -[ - [1, 2, 3, 4, 5], # minimum value of row is 1 - [5, 6, 7, 8, 9], # minimum value of row is 5 - [20, 21, 34, 56, 100] # minimum value of row is 20 -] -``` -So, the function should return `26` because sum of minimums is as `1 + 5 + 20 = 26` - """, - "", - 0.8, - ], - [ - """ -# Personalized greeting - -Create a function that gives a personalized greeting. This function takes two parameters: `name` and `owner`. - """, - """ -Use conditionals to return the proper message: - -case| return ---- | --- -name equals owner | 'Hello boss' -otherwise | 'Hello guest' -def greet(name, owner): - """, - 0.8, - ], -] - - -def run(): - st.set_page_config(page_title="Code Clippy Problem Solver") - # sidebar - st.sidebar.title("Code Clippy") - st.sidebar.image( - "https://raw.githubusercontent.com/ncoop57/gpt-code-clippy/camera-ready/code_clippy_logo.jpg", - caption="(c) awesome Aimee Trevett", - ) - st.sidebar.markdown("[Github](https://github.com/ncoop57/gpt-code-clippy)") - st.sidebar.markdown("[Report](https://github.com/ncoop57/gpt-code-clippy/wiki)") - - st.sidebar.markdown("### Controls:") - - temperature = st.sidebar.slider( - "Temperature", - min_value=0.5, - max_value=1.5, - value=0.8, - step=0.1, - ) - num_beams = st.sidebar.slider( - "Num beams", - min_value=1, - max_value=4, - step=1, - ) - - # main body - model, tokenizer = get_model() - - question = st.text_input( - "Problem: ", - value="A function that can greet user by name. 
Given a name it should say hello to user.", - help="Text description of the coding problem to be solved", - ) - starter_code = st.text_input( - "Started code: ", value="def greet(name):", help="Optional starter code" - ) - submit_button = st.button("Solve") - - if submit_button: - text = st.text("Generating solution...") - # gif from https://giphy.com/gifs/alan-DfSXiR60W9MVq - gif_runner = st.image("./loading.gif") - output = generate_solution( - model, tokenizer, question, starter_code, temperature, num_beams - ) - text.empty() - gif_runner.empty() - - st.text("Solution:") - st.code(output, language="python") - - # Create link to carbon to make a nice screenshot of the generated code - url_code = urllib.parse.quote(f"# {question}\n{output}") - st.markdown( - f"[Would you like a Carbon Copy?](https://carbon.now.sh/?bg=rgba%280%2C0%2C0%2C0%29&t=seti&wt=none&l=python&ds=false&dsyoff=20px&dsblur=68px&wc=true&wa=false&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=2x&wm=false&code={url_code})" - ) - - -if __name__ == "__main__": - run() \ No newline at end of file diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js deleted file mode 100644 index c1cab808bb91c444e053d77ea66e18523f39923c..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/about_tab.js +++ /dev/null @@ -1,54 +0,0 @@ -import Component from '../lib/component.js'; -import store from '../store/index.js'; - -/** - * @classdesc UI component for "About..." tab. - */ -export default class AboutTab extends Component { - - /** - * @constructor - */ - constructor() { - super({ - store, - element: document.querySelector('#about-tab'), - eventName: 'aboutTabChange' - }); - } - - /** - * Renders the global UI elements. 
- */ - render() { - let dict = window.lang_dict[store.state.language]['aboutTab']; - - // Purpose section - this.element.querySelector('#purpose-title').innerHTML = dict['purposeTitle']; - this.element.querySelector('#purpose-text').innerHTML = dict['purposeText']; - - // RL section - this.element.querySelector('#rl-title').innerHTML = dict['rlTitle']; - this.element.querySelector('#rl-text').innerHTML = dict['rlText']; - - // DRL section - this.element.querySelector('#drl-title').innerHTML = dict['drlTitle']; - this.element.querySelector('#drl-text').innerHTML = dict['drlText']; - - // ACL section - this.element.querySelector('#acl-title').innerHTML = dict['aclTitle']; - this.element.querySelector('#acl-text').innerHTML = dict['aclText']; - - // About demo section - this.element.querySelector('#about-demo-title').innerHTML = dict['aboutDemoTitle']; - this.element.querySelector('#about-demo-text').innerHTML = dict['aboutDemoText']; - - // Credits section - this.element.querySelector('#credits-title').innerHTML = dict['creditsTitle']; - this.element.querySelector('#credits-text').innerHTML = dict['creditsText']; - - // References section - this.element.querySelector('#references-title').innerHTML = dict['referencesTitle']; - this.element.querySelector('#references-text').innerHTML = dict['referencesText']; - } -}; \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py deleted file mode 100644 index 0a3df741f3a6118297898574e4a7bf6921272038..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/helper.py +++ /dev/null @@ -1,295 +0,0 @@ -import numpy as np - -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -import time -from collections import deque - - -class Peer(NPC): - """ - A dancing NPC that the agent has to copy - """ - - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.npc_dir = 1 # NPC initially looks downward - self.npc_type = 0 - self.env = env - self.npc_actions = [] - self.dancing_step_idx = 0 - self.actions = MiniGridEnv.Actions - self.add_npc_direction = True - self.available_moves = [self.rotate_left, self.rotate_right, self.go_forward, self.toggle_action] - - selected_door_id = self.env._rand_elem([0, 1]) - self.selected_door_pos = [self.env.door_pos_top, self.env.door_pos_bottom][selected_door_id] - self.selected_door = [self.env.door_top, self.env.door_bottom][selected_door_id] - self.joint_attention_achieved = False - - def can_overlap(self): - # If the NPC is hidden, agent can overlap on it - return self.env.hidden_npc - - def encode(self, nb_dims=3): - if self.env.hidden_npc: - if nb_dims == 3: - return (1, 0, 0) - elif nb_dims == 4: - return (1, 0, 0, 0) - else: - return super().encode(nb_dims=nb_dims) - - def step(self): - - distance_to_door = np.abs(self.selected_door_pos - self.cur_pos).sum(-1) - - if all(self.front_pos == self.selected_door_pos) and self.selected_door.is_open: - # in front of door - self.go_forward() - - elif distance_to_door == 1 and not self.joint_attention_achieved: - # before turning to the door look at the agent - wanted_dir = self.compute_wanted_dir(self.env.agent_pos) - act = self.compute_turn_action(wanted_dir) - act() - if self.is_eye_contact(): - self.joint_attention_achieved = True - - else: - act = self.path_to_toggle_pos(self.selected_door_pos) - 
act() - - # not really important as the NPC doesn't speak - if self.env.hidden_npc: - return None - - - -class HelperGrammar(object): - - templates = ["Move your", "Shake your"] - things = ["body", "head"] - - grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)]) - - @classmethod - def construct_utterance(cls, action): - return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " " - - -class HelperEnv(MultiModalMiniGridEnv): - """ - Environment in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=5, - diminished_reward=True, - step_penalty=False, - knowledgeable=False, - max_steps=20, - hidden_npc=False, - ): - assert size >= 5 - self.empty_symbol = "NA \n" - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - self.knowledgeable = knowledgeable - self.hidden_npc = hidden_npc - - super().__init__( - grid_size=size, - max_steps=max_steps, - # Set this to True for maximum speed - see_through_walls=True, - actions=MiniGridEnv.Actions, - action_space=spaces.MultiDiscrete([ - len(MiniGridEnv.Actions), - *HelperGrammar.grammar_action_space.nvec - ]), - add_npc_direction=True - ) - - print({ - "size": size, - "diminished_reward": diminished_reward, - "step_penalty": step_penalty, - }) - - def _gen_grid(self, width, height): - # Create the grid - self.grid = Grid(width, height, nb_obj_dims=4) - - # Randomly vary the room width and height - width = self._rand_int(5, width+1) - height = self._rand_int(5, height+1) - - self.wall_x = width-1 - self.wall_y = height-1 - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # add lava - self.grid.vert_wall(width//2, 1, height - 2, Lava) - - # door top - door_color_top = self._rand_elem(COLOR_NAMES) - self.door_pos_top = (width-1, 1) - self.door_top = Door(door_color_top, is_locked=True) - self.grid.set(*self.door_pos_top, self.door_top) - - # switch top - self.switch_pos_top = (0, 1) - self.switch_top = Switch(door_color_top, lockable_object=self.door_top, locker_switch=True) - self.grid.set(*self.switch_pos_top, self.switch_top) - - # door bottom - door_color_bottom = self._rand_elem(COLOR_NAMES) - self.door_pos_bottom = (width-1, height-2) - self.door_bottom = Door(door_color_bottom, is_locked=True) - self.grid.set(*self.door_pos_bottom, self.door_bottom) - - # switch bottom - self.switch_pos_bottom = (0, height-2) - self.switch_bottom = Switch(door_color_bottom, lockable_object=self.door_bottom, locker_switch=True) - self.grid.set(*self.switch_pos_bottom, self.switch_bottom) - - # save to variables - self.switches = [self.switch_top, self.switch_bottom] - self.switches_pos = [self.switch_pos_top, self.switch_pos_bottom] - self.door = [self.door_top, self.door_bottom] - self.door_pos = [self.door_pos_top, self.door_pos_bottom] - - # Set a randomly coloured Dancer NPC - color = self._rand_elem(COLOR_NAMES) - self.peer = Peer(color, "Jill", self) - - # Place it on the middle right side of the room - peer_pos = np.array((self._rand_int(width//2+1, width - 1), self._rand_int(1, height - 1))) - - self.grid.set(*peer_pos, self.peer) - self.peer.init_pos = peer_pos - self.peer.cur_pos = peer_pos - - # Randomize the agent's start position and orientation - self.place_agent(size=(width//2, height)) - - # Generate the mission string - self.mission = 'watch dancer and repeat his moves afterwards' - - # Dummy beginning string - self.beginning_string = "This is what you hear. 
\n" - self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - # used for rendering - self.conversation = self.utterance - self.outcome_info = None - - def step(self, action): - p_action = action[0] - utterance_action = action[1:] - - obs, reward, done, info = super().step(p_action) - self.peer.step() - - if np.isnan(p_action): - pass - - if p_action == self.actions.done: - done = True - - elif all(self.agent_pos == self.door_pos_top): - done = True - - elif all(self.agent_pos == self.door_pos_bottom): - done = True - - elif all([self.switch_top.is_on, self.switch_bottom.is_on]): - # if both switches are on no reward is given and episode ends - done = True - - elif all(self.peer.cur_pos == self.peer.selected_door_pos): - reward = self._reward() - done = True - - # discount - if self.step_penalty: - reward = reward - 0.01 - - if self.hidden_npc: - # all npc are hidden - assert np.argwhere(obs['image'][:,:,0] == OBJECT_TO_IDX['npc']).size == 0 - assert "{}:".format(self.peer.name) not in self.utterance - - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - if done: - if reward > 0: - self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1)) - else: - self.outcome_info = "FAILURE: agent got {} reward \n".format(reward) - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - def render(self, *args, **kwargs): - obs = super().render(*args, **kwargs) - self.window.clear_text() # erase previous text - - # self.window.set_caption(self.conversation, [self.peer.name]) - # self.window.ax.set_title("correct door: {}".format(self.true_guide.target_color), loc="left", fontsize=10) - if self.outcome_info: - color = None - if "SUCCESS" in self.outcome_info: - color = "lime" - elif "FAILURE" in self.outcome_info: - color = "red" - self.window.add_text(*(0.01, 0.85, self.outcome_info), - **{'fontsize':15, 'color':color, 'weight':"bold"}) - - self.window.show_img(obs) # re-draw image to add changes to window - return obs - - -class Helper8x8Env(HelperEnv): - def __init__(self, **kwargs): - super().__init__(size=8, max_steps=20, **kwargs) - - -class Helper6x6Env(HelperEnv): - def __init__(self): - super().__init__(size=6, max_steps=20) - - - -register( - id='MiniGrid-Helper-5x5-v0', - entry_point='gym_minigrid.envs:HelperEnv' -) - -register( - id='MiniGrid-Helper-6x6-v0', - entry_point='gym_minigrid.envs:Helper6x6Env' -) - -register( - id='MiniGrid-Helper-8x8-v0', - entry_point='gym_minigrid.envs:Helper8x8Env' -) diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py deleted file mode 100644 index 3b18bd0839fba3897d92660a7eb8bd79d493d2f1..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/color_generator/run.py +++ /dev/null @@ -1,63 +0,0 @@ -import gradio as gr -import cv2 -import numpy as np -import random - - -# Convert decimal color to hexadecimal color -def RGB_to_Hex(rgb): - color = "#" - for i in rgb: - num = int(i) - color += str(hex(num))[-2:].replace("x", "0").upper() - return color - - -# Randomly generate light or dark colors -def random_color(is_light=True): - return ( - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + 
int(is_light) * 128, - ) - - -def switch_color(color_style): - if color_style == "light": - is_light = True - elif color_style == "dark": - is_light = False - back_color_ = random_color(is_light) # Randomly generate colors - back_color = RGB_to_Hex(back_color_) # Convert to hexadecimal - - # Draw color pictures. - w, h = 50, 50 - img = np.zeros((h, w, 3), np.uint8) - cv2.rectangle(img, (0, 0), (w, h), back_color_, thickness=-1) - - return back_color, back_color, img - - -inputs = [gr.Radio(["light", "dark"], value="light")] - -outputs = [ - gr.ColorPicker(label="color"), - gr.Textbox(label="hexadecimal color"), - gr.Image(type="numpy", label="color picture"), -] - -title = "Color Generator" -description = ( - "Click the Submit button, and a dark or light color will be randomly generated." -) - -demo = gr.Interface( - fn=switch_color, - inputs=inputs, - outputs=outputs, - title=title, - description=description, -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/frncscp/bullerengue/musika/musika_encode.py b/spaces/frncscp/bullerengue/musika/musika_encode.py deleted file mode 100644 index 566d2b169219488258798a117c67325b518fcaf1..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/musika_encode.py +++ /dev/null @@ -1,24 +0,0 @@ -import os - -os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" - -from parse.parse_encode import parse_args -from models import Models_functions -from utils_encode import UtilsEncode_functions - -if __name__ == "__main__": - - # parse args - args = parse_args() - - # initialize networks - M = Models_functions(args) - M.download_networks() - models_ls = M.get_networks() - - # encode samples - U = UtilsEncode_functions(args) - if args.whole: - U.compress_whole_files(models_ls) - else: - U.compress_files(models_ls) diff --git a/spaces/frncscp/bullerengue/musika/parse/parse_decode.py b/spaces/frncscp/bullerengue/musika/parse/parse_decode.py deleted file mode 100644 index 8472d77c4a13d116417fb3a58469cf4a41fa8038..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/parse/parse_decode.py +++ /dev/null @@ -1,210 +0,0 @@ -import argparse -from typing import Any -import tensorflow as tf - - -class EasyDict(dict): - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - -def params_args(args): - parser = argparse.ArgumentParser() - - parser.add_argument( - "--hop", - type=int, - default=256, - help="Hop size (window size = 4*hop)", - ) - parser.add_argument( - "--mel_bins", - type=int, - default=256, - help="Mel bins in mel-spectrograms", - ) - parser.add_argument( - "--sr", - type=int, - default=44100, - help="Sampling Rate", - ) - parser.add_argument( - "--small", - type=str2bool, - default=False, - help="If True, use model with shorter available context, useful for small datasets", - ) - parser.add_argument( - "--latdepth", - type=int, - default=64, - help="Depth of generated latent vectors", - ) - parser.add_argument( - "--coorddepth", - type=int, - default=64, - help="Dimension of latent coordinate and style random vectors", - ) - 
parser.add_argument( - "--max_lat_len", - type=int, - default=512, - help="Length of latent sequences: a random on-the-fly crop will be used for training", - ) - parser.add_argument( - "--base_channels", - type=int, - default=128, - help="Base channels for generator and discriminator architectures", - ) - parser.add_argument( - "--shape", - type=int, - default=128, - help="Length of spectrograms time axis", - ) - parser.add_argument( - "--window", - type=int, - default=64, - help="Generator spectrogram window (must divide shape)", - ) - parser.add_argument( - "--mu_rescale", - type=float, - default=-25.0, - help="Spectrogram mu used to normalize", - ) - parser.add_argument( - "--sigma_rescale", - type=float, - default=75.0, - help="Spectrogram sigma used to normalize", - ) - parser.add_argument( - "--files_path", - type=str, - default="audio_samples/", - help="Path of compressed latent samples to decode", - ) - parser.add_argument( - "--save_path", - type=str, - default="decoded_samples/", - help="Path where decoded audio files will be saved", - ) - parser.add_argument( - "--dec_path", - type=str, - default="checkpoints/ae", - help="Path of pretrained decoders weights", - ) - parser.add_argument( - "--load_path", - type=str, - default="None", - help="If not None, load models weights from this path", - ) - parser.add_argument( - "--base_path", - type=str, - default="checkpoints", - help="Path where pretrained models are downloaded", - ) - parser.add_argument( - "--testing", - type=str2bool, - default=True, - help="True if optimizers weight do not need to be loaded", - ) - parser.add_argument( - "--cpu", - type=str2bool, - default=False, - help="True if you wish to use cpu", - ) - parser.add_argument( - "--mixed_precision", - type=str2bool, - default=True, - help="True if your GPU supports mixed precision", - ) - - tmp_args = parser.parse_args() - - args.hop = tmp_args.hop - args.mel_bins = tmp_args.mel_bins - args.sr = tmp_args.sr - args.small = tmp_args.small - args.latdepth = tmp_args.latdepth - args.coorddepth = tmp_args.coorddepth - args.max_lat_len = tmp_args.max_lat_len - args.base_channels = tmp_args.base_channels - args.shape = tmp_args.shape - args.window = tmp_args.window - args.mu_rescale = tmp_args.mu_rescale - args.sigma_rescale = tmp_args.sigma_rescale - args.save_path = tmp_args.save_path - args.files_path = tmp_args.files_path - args.dec_path = tmp_args.dec_path - args.load_path = tmp_args.load_path - args.base_path = tmp_args.base_path - args.testing = tmp_args.testing - args.cpu = tmp_args.cpu - args.mixed_precision = tmp_args.mixed_precision - - if args.small: - args.latlen = 128 - else: - args.latlen = 256 - args.coordlen = (args.latlen // 2) * 3 - - print() - - args.datatype = tf.float32 - gpuls = tf.config.list_physical_devices("GPU") - if len(gpuls) == 0 or args.cpu: - args.cpu = True - args.mixed_precision = False - tf.config.set_visible_devices([], "GPU") - print() - print("Using CPU...") - print() - if args.mixed_precision: - args.datatype = tf.float16 - print() - print("Using GPU with mixed precision enabled...") - print() - if not args.mixed_precision and not args.cpu: - print() - print("Using GPU without mixed precision...") - print() - - return args - - -def parse_args(): - args = EasyDict() - return params_args(args) diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py deleted file mode 100644 index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000 --- 
a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Phind.py +++ /dev/null @@ -1,36 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://phind.com' -model = ['gpt-4'] -supports_stream = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'model': model, - 'messages': messages}, separators=(',', ':')) - - cmd = ['python', f'{path}/helpers/phind.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - if b'Just a moment...' in line: - os.system('clear' if os.name == 'posix' else 'cls') - yield 'Clouflare error, please try again...' - os._exit(0) - - else: - if b'ping - 2023-' in line: - continue - - yield line.decode('cp1251') #[:-1] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,57 +0,0 @@ -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. 
- For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs, prev_output) diff --git a/spaces/gordonchan/h2oo/enums.py b/spaces/gordonchan/h2oo/enums.py deleted file mode 100644 index 2041b8c24f3bbb7bf0e368ebbdbc482adbb4da80..0000000000000000000000000000000000000000 --- a/spaces/gordonchan/h2oo/enums.py +++ /dev/null @@ -1,120 +0,0 @@ -from enum import Enum - - -class PromptType(Enum): - custom = -1 - plain = 0 - instruct = 1 - quality = 2 - human_bot = 3 - dai_faq = 4 - summarize = 5 - simple_instruct = 6 - instruct_vicuna = 7 - instruct_with_end = 8 - human_bot_orig = 9 - prompt_answer = 10 - open_assistant = 11 - wizard_lm = 12 - wizard_mega = 13 - instruct_vicuna2 = 14 - instruct_vicuna3 = 15 - wizard2 = 16 - wizard3 = 17 - instruct_simple = 18 - wizard_vicuna = 19 - openai = 20 - openai_chat = 21 - gptj = 22 - prompt_answer_openllama = 23 - vicuna11 = 24 - mptinstruct = 25 - mptchat = 26 - falcon = 27 - guanaco = 28 - llama2 = 29 - - -class DocumentSubset(Enum): - Relevant = 0 - RelSources = 1 - TopKSources = 2 - - -non_query_commands = [ - DocumentSubset.RelSources.name, - DocumentSubset.TopKSources.name -] - - -class DocumentChoice(Enum): - ALL = 'All' - - -class LangChainMode(Enum): - """LangChain mode""" - - DISABLED = "Disabled" - LLM = "LLM" - ALL = "All" - WIKI = "wiki" - WIKI_FULL = "wiki_full" - USER_DATA = "UserData" - MY_DATA = "MyData" - GITHUB_H2OGPT = "github h2oGPT" - H2O_DAI_DOCS = "DriverlessAI docs" - - -# modes should not be removed from visible list or added by name -langchain_modes_intrinsic = [LangChainMode.DISABLED.value, - LangChainMode.LLM.value, - LangChainMode.MY_DATA.value] - - -class LangChainAction(Enum): - """LangChain action""" - - QUERY = "Query" - # WIP: - # SUMMARIZE_MAP = "Summarize_map_reduce" - SUMMARIZE_MAP = "Summarize" - SUMMARIZE_ALL = "Summarize_all" - SUMMARIZE_REFINE = "Summarize_refine" - - -class LangChainAgent(Enum): - """LangChain agents""" - - SEARCH = "Search" - # CSV = "csv" # WIP - - -no_server_str = no_lora_str = no_model_str = '[None/Remove]' - -# from site-packages/langchain/llms/openai.py -# but needed since ChatOpenAI doesn't have this information -model_token_mapping = { - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768, - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-16k": 16 * 1024, - "gpt-3.5-turbo-0301": 4096, - "text-ada-001": 2049, - "ada": 2049, - "text-babbage-001": 2040, - "babbage": 2049, - "text-curie-001": 2049, - "curie": 2049, - "davinci": 2049, - "text-davinci-003": 4097, - "text-davinci-002": 4097, - "code-davinci-002": 8001, - "code-davinci-001": 8001, - "code-cushman-002": 2048, - "code-cushman-001": 2048, -} - -source_prefix = "Sources [Score | Link]:" -source_postfix = "End Sources

              " diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md b/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md deleted file mode 100644 index df317c733d856171642e2b44d07989c0abb8ce14..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies A Guide for Harry Potter Fans.md +++ /dev/null @@ -1,6 +0,0 @@ - -

Internet Archive is a digital library with a large collection of free movies, books, audio files, images, and more. You can also find movie audio tracks on this website or download them in MP3 format. To find a movie's audio track, type the movie title in the search field, open the result, and click MP3 under the DOWNLOAD OPTIONS.

              -

              Fantastic Beasts And Where To Find Them English Book In Telugu Download Movies


              Download Zip ○○○ https://urlgoal.com/2uyNcr



              -

The HP movie series is an adaptation of the fantasy novels of the same name by the British author J.K. Rowling. The Harry Potter books and movies revolve around the main character, Harry Potter (played by Daniel Radcliffe), a young orphaned boy who finds out on his eleventh birthday that he is a wizard. He is then admitted to the Hogwarts School of Witchcraft and Wizardry, where he uncovers the truth about his parents' death. Along with his friends, he sets out on an adventure full of mystery, suspense, friendship, games, family, and much more, with the ultimate aim of defeating Lord Voldemort, portrayed as the most powerful and seemingly undefeatable dark wizard in ages.

              aaccfb2cb3
              -
              -
              \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md b/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md deleted file mode 100644 index f518341cfbd457b334932657577b0307a9758fac..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Fresco Logic Usb 3.0 Driver For Mac Palyginkite skirtingus USB 3.0 VGAHDMI lustus ir j privalumus.md +++ /dev/null @@ -1,35 +0,0 @@ -
              -

Device Manager is a built-in Windows tool that helps users download and install updated drivers. Below is how to use it to download and install the Fresco Logic USB VGA display driver update on Windows 11/10.

              -

              Fresco Logic Usb 3.0 Driver For Mac


              DOWNLOAD ——— https://urlgoal.com/2uyN6r



              -

              Above we elucidated all the manual methods to download the Fresco Logic USB VGA display driver for Windows 10/11. These manual ways are complicated and a bit time-consuming. Hence, to save you precious time and effort, we recommend downloading and installing the updated drivers via a program like Bit Driver Updater.

              -

              After you are done downloading and installing Bit Driver Updater, wait for two to three seconds to get a list of problematic drivers. Once you know which drivers are outdated, you may Update All these outdated drivers with a single click on the designated button to do it.

              -

The above covered downloading and installing the updated Fresco Logic USB display driver for Windows 11/10. If your USB display driver is still not working for any reason, the following section explains what to do.

              -

              The driver might not be installed correctly if you tried to do the installation manually. Hence, you may try reinstalling the Fresco Logic USB display driver if it is not working. Below is the process to do it.

              -

              Outdated drivers are among the most prominent causes of almost all problems like the Fresco Logic USB VGA display driver not working. Hence, you may follow the above guide to download the driver update and install it.

              -

              -

This article covered the best available methods to download and install the updated Fresco Logic USB display driver for Windows 11/10, along with how to fix the driver if it is not working.

              -

              When I rebooted my mac, it hung up on the boot page with the apple logo. The boot process hangs even though the loading bar is completely filled. I can't start my computer now so I'm trying to delete the driver via recovery mode. I think it is one of the kext files in Library/Extensions, but there are so many and none of them say frescologic.

              -

              My macbook pro is a 2015 model with Mac OS Mojave installed. Unfortunately I don't have a filevault backup so uninstalling the driver is my only hope of recovering my work. Is there an easy way to identify this kernel extension and am I even looking in the right place?

              -

              I've been looking everywhere for a driver or chipset info for the unbranded "Mini HD USB 3.0 HDMI Adapter" for years. I finally dug though enough duck duck go results to find a page that claims it uses the Fresco logic USB display driver, which brought me here.

              -

              This driver is tested on Ubuntu 14 LTS as well as some Android platforms with kernel version 3.10.x. This driver source might not compile on newer kernels (eg. 4.0 or above) because of the fast-moving API changes in the mainstream kernel. You might need to adapt it for your own use.

              -

              thank!! happened))
              communication with you helped me remember another way)
              Device manager > errror fresco > Update driver > Browse my computer > Let me pick from a list of device driver> Have Disk...> mu fresco driver))

              -

              (c) Download, decompress the Fresco Logic registry fix. Insert the fix into your registry, by double clicking it on windows. This registry fix only inserts a key used by the FL driver to disable the U1/U2 power states I think, but you are using this at your own risk.

              -

              I started trying out various Fresco Logic drivers I could get my hands on. I have tried, without success, versions: 3.5.93.0, 3.5.88.0, 3.5.73.0, 3.5.46.0, 3.5.30.0, 3.5.24, 3.5.2.0. Version 3.5.24 seemed to be working, but when I pushed the drive by transferring lots of data simultaneously, the device disconnected.

              -

              I have contacted AsRock Support (yet again) and these guys have been great in sending me driver v3.0.100.58 which actually works!! To save the day for anyone with the same problem I have made a backup of this working driver!

              -

              I bet you tried hard and I know how it feels; I got similar treatment at the time. Fresco Logic (the controller manufacturer of my computer) just ignored my very specific requests for support; probably they knew already the problem existed with their drivers; not sure.

              -

              Back to your problem. Given that you tried the WD on other laptops and it works without problems that does point the finger to the Asus laptop. However, do double-check that the different laptops did have the same OS (e.g. win 10) with your Asus, so you can have a fair field to compare, at least as far as the OS goes. Beyond that, you could try to bypass the Asus driver and install the Intel USB drivers from Intel. Maybe you can work with this: -USB-3-0-eXtensible-Host-Controller-Driver, otherwise try to locate which drivers are compatible with that of your laptop. See if you can replace the Asus-supplied driver and helps solve your problem (at your own risk of course :) )

              -

              Thank you for sharing your extra efforts on this problem. I had the exact same problem with a WD My Passport 1TB USB 3.0 Portable hard drive and my Fresco FL1000 USB3.0 driver on Windows. It is working perfectly now!

              -

              You saved my day, or rather my evening. Have the exact same wd passport as you and had to play detective for about an hour to conclude it must be a problem with the usb 3.0, Fresco, just as yours. And your driver WORKS. Excellent ,and thank you.

              -

              Workaround (a) I think implied that you will probably use a well tested, quality USB 3.0 controller coupled with good drivers, so you are indeed changing hardware configuration and there are plenty of those that we know that the drive works. As for the Y-cable, I think it is a good solution for laptops that are known to have issues with power on their usb ports, but for desktops with on-board usb3.0 ports, that should not be an issue, although for some people that can be the case. Not for me.

              -

              Drivers installed are the official ones provided by Asus :
              _Fresco_Win8_64_Z35730.zip

              As you can see, drivers are provided with a patch, but I'm not sure if it's really installed, the batch file doesn't seem to return anything.

              Any help will be appreciated :)

              PS: I think that guy pointed out something related to my issue :
              -US/a3748df9-18bf-48b7-a834-a99c9de84e3b/2-problems-after-updating-to-windows-81?forum=w8itprohardware&prof=required

              But I'm not sure what I can do with this.

              -

              Thank you for your reply,

              Here's Asus support answer :

              Dear sir,

              Thank you for your e-mail.
              In this case i would advise you to make use of the "Go Back function" in windows 10.
              Since your system has been delivered with Windows 7, we do not provide this system with drivers for Windows 10 yet.
              Probably Windows updates will provide this device with drivers through windows updates in the future.
              We would advise you to go back to your previous Windows OS till your system will be compatible with Windows 10.

              A/V is uninstalled and SFC said it fixed some things but issue is still there.

              Kind regards

              -

              The driver roll-back that I mentioned in my previous post was the solution to the problem. Crazy that I had to roll back to a driver that was already outdated when my computer was released but that is what fixed the problem. Anyone having similar problems should check drivers and consider rolling back to the earliest available driver they can find. Test it then update one version at a time until it stops working. Go back to the last version and leave it at that.

              -

              The fix for me has been to update the Fresco Logic xHCI (USB3) Controller driver to version 3.5.30.0 - but this is not the newest version, and as kyroguy pointed out, newer versions of the driver caused the same problem. So it seems there is a narrow range of Fresco Logic driver versions that properly support this drive - older versions and newer versions of the driver do not work.

              -

              It sounds like you may be experiencing a device-specific issue associated with power management. Power management has been enabled more aggressively in these recent drivers, which would explain why you are seeing this issue with our most up-to-date software.

              -

              To update your USB drivers in Windows 10, go to Settings > Update & Security > Windows Update, then click Check for Updates. Windows will search for available updates, including driver updates. Alternatively, navigate to Device Manager and click Universal Serial Bus Controllers. Right-click the device you're having an issue with and select Update Driver.
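If you would rather verify the result from a script than click through Device Manager, the sketch below is one possible way to list the third-party driver packages Windows has staged and pick out any Fresco Logic entries, so you can confirm which driver version is present before and after an update. It assumes a Windows 10/11 machine with Python available and relies on the built-in pnputil tool; the field labels in pnputil's output can differ between Windows builds, so treat the parsing and the "Provider Name" filter as illustrative rather than definitive.

```python
# Sketch: list staged third-party driver packages via the built-in pnputil tool
# and keep only those whose provider looks like Fresco Logic.
import subprocess

def fresco_driver_packages():
    # "pnputil /enum-drivers" prints one block of "Key: Value" lines per oemNN.inf package.
    result = subprocess.run(
        ["pnputil", "/enum-drivers"],
        capture_output=True, text=True, check=True,
    )
    packages, current = [], {}
    for raw in result.stdout.splitlines():
        line = raw.strip()
        if not line:                      # a blank line ends a package block
            if current:
                packages.append(current)
                current = {}
            continue
        key, sep, value = line.partition(":")
        if sep:
            current[key.strip()] = value.strip()
    if current:
        packages.append(current)
    # Field name assumed from typical pnputil output; adjust if your build differs.
    return [p for p in packages if "fresco" in p.get("Provider Name", "").lower()]

if __name__ == "__main__":
    for pkg in fresco_driver_packages():
        print(pkg.get("Published Name", "?"), "-", pkg.get("Driver Version", "unknown version"))
```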

              -

              To reinstall a USB driver, navigate to Device Manager, right-click the name of the device you're having an issue with, and select Uninstall. Restart your PC, and Windows will automatically reinstall the driver.

              -

              To uninstall USB drivers, navigate to Device Manager, click the View menu, and enable Show Hidden Devices. Find the type of device you're dealing with, then expand the menu, right-click your device, and select Uninstall. In the confirmation dialog, click Delete the driver software for this device > OK.

              -

              To uninstall the Fresco Logic USB display driver, you can go to the Add/Remove Program feature in Windows Control Panel. In Windows Vista/7/8/10, click the Add or Remove Programs tab and select Uninstall a program. On Windows XP, click the Add or Remove Programs tab and select Change/Remove. The removal process will begin, and a progress bar will display how long it will take. This driver runs on Windows OS releases and PC manufacturers install it on their systems.

              -

First, uninstall the Fresco Logic VGA display driver by following the instructions provided by your computer manufacturer. In most cases, the driver files live under C:\Program Files (x86) and the driver can be removed through the Control Panel. Alternatively, you can uninstall the Fresco Logic USB display driver manually from the command prompt, as sketched below.
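As a rough illustration of the command-prompt route, here is a minimal sketch that removes a staged driver package with pnputil from Python. Everything device-specific in it is an assumption: "oem42.inf" is a placeholder for the published name you would first look up with `pnputil /enum-drivers`, the script must be run from an elevated prompt, and deleting the wrong package can leave a device without a working driver, so double-check the entry before running it.

```python
# Sketch: remove a staged driver package from the Windows driver store.
# Must be run from an elevated (administrator) prompt.
import subprocess

def remove_driver_package(published_name: str) -> None:
    # /delete-driver removes the package from the driver store;
    # adding /uninstall also uninstalls it from devices that currently use it.
    subprocess.run(
        ["pnputil", "/delete-driver", published_name, "/uninstall"],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical published name -- replace with the one reported by "pnputil /enum-drivers".
    remove_driver_package("oem42.inf")
```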

              aaccfb2cb3
              -
              -
              \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md b/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md deleted file mode 100644 index 858dc119afe09bcf8a18716a7d153e335d0e8f7c..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Mission Impossible 1988 Season 1 DVDRip XviD-48 ((TOP)).md +++ /dev/null @@ -1,18 +0,0 @@ -

              Mission Impossible 1988 Season 1 DVDRip XviD-48


              Download File 🗹 https://urlgoal.com/2uyM8f



              - -This is a torrent. - -Download Mission Impossible 1988 Season 1 DVDRip XviD-48 torrent. Cloud. The hero team's mission is a success when: the assassination of a world political leader triggers a terrorist attack; an airplane is hijacked, causing a crash on a Los Angeles freeway; a stolen car causes a train wreck; several nuclear scientists are kidnapped; and a plan to blow up a bus causes it to be flattened on a road. Title: Mission Impossible 1988 Season 1 DVDRip XviD-48. Runtime: 89 min. An imaginary future. - -A future in which America and other countries are defenseless against a covert, highly technological group of scientists who, under a code name, eliminate political leaders and attempt to control all governments. File Name Mission Impossible 1988 Season 1 DVDRip XviD-48 (Total 48). Watch online this video in HD quality 1080p on horsereban.Must be some kind of habit with you I find myself falling for you. - -MEMORIES, memories, memories - -All the memories that I have are of you. In each and every one of them. Every evening the need for you to come and tell me every little detail about every memory you have of us. I wake up and I want to see your lips on mine. My hands on your body. My lips against your skin. My breath going in and out of your mouth. Kissing you all over your body, feeling you shake with the first touch of my lips. Feel the excitement in you. Feel the need to melt into you. All the memories. The feelings and desires that we had. - -The desire for you is still there. It doesn’t get away from me. Not ever. I can still feel it pulling my body into yours. I can still feel your hands on my skin. I can still feel my body pressed up against yours. I still want to feel your cock inside me. I still want to feel your cock in me, feeling my body pressed up against yours. I still want to feel your lips on my neck. I still want to feel your mouth on mine. Your tongue in my mouth. My lips against yours. My body against yours. My arms around you. My hands on your body. - -You have lost none of the power you once had over me. The need for you is still here. It doesn’t get less and it won’t get away from me. 4fefd39f24
              -
              -
              -

              diff --git a/spaces/gradio/HuBERT/fairseq/models/hubert/hubert.py b/spaces/gradio/HuBERT/fairseq/models/hubert/hubert.py deleted file mode 100644 index 232a5e402a146023e5c93f3c2574ecec98faf9d5..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/hubert/hubert.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Dict, List, Optional, Tuple - -import numpy as np - -import torch -import torch.nn as nn -from dataclasses import dataclass, field -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.models.wav2vec.wav2vec2 import ( - ConvFeatureExtractionModel, - TransformerEncoder, -) -from fairseq.modules import GradMultiply, LayerNorm -from fairseq.tasks.hubert_pretraining import ( - HubertPretrainingConfig, - HubertPretrainingTask, -) -from omegaconf import II - -logger = logging.getLogger(__name__) - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum( - ["static", "uniform", "normal", "poisson"] -) - - -@dataclass -class HubertConfig(FairseqDataclass): - label_rate: int = II("task.label_rate") - - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. default has a single group " - "norm with d groups in the first conv block, whereas layer_norm " - "has layer norms in every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for the transformer"}, - ) - attention_dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for attention weights"}, - ) - activation_dropout: float = field( - default=0.0, - metadata={"help": "dropout probability after activation in FFN"}, - ) - encoder_layerdrop: float = field( - default=0.0, - metadata={"help": "probability of dropping a tarnsformer layer"}, - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={ - "help": "dropout to apply to the features (after feat extr)" - }, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many " - "dimensions. 
set to encoder_embed_dim is <= 0" - }, - ) - untie_final_proj: bool = field( - default=False, - metadata={"help": "use separate projection for each target"}, - ) - layer_norm_first: bool = field( - default=False, - metadata={"help": "apply layernorm first in the transformer"}, - ) - conv_feature_layers: str = field( - default="[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2", - metadata={ - "help": "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, - metadata={"help": "multiply feature extractor var grads by this"}, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, - metadata={"help": "probability of replacing a token with mask"}, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # channel masking - mask_channel_length: int = field( - default=10, - metadata={"help": "length of the mask for features (channels)"}, - ) - mask_channel_prob: float = field( - default=0.0, - metadata={"help": "probability of replacing a feature with 0"}, - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, - metadata={"help": "whether to allow channel masks to overlap"}, - ) - mask_channel_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={ - "help": "number of filters for convolutional positional embeddings" - }, - ) - conv_pos_groups: int = field( - default=16, - metadata={ - "help": "number of groups for convolutional positional embedding" - }, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={"help": "legacy (to be removed)"}, - ) - - # loss computation - skip_masked: bool = field( - default=False, - metadata={"help": "skip computing losses over masked frames"}, - ) - skip_nomask: bool = field( - default=False, - metadata={"help": "skip computing losses over unmasked frames"}, - ) - - -@register_model("hubert", dataclass=HubertConfig) -class HubertModel(BaseFairseqModel): - def __init__( - self, - cfg: HubertConfig, - task_cfg: HubertPretrainingConfig, - dictionaries: List[Dictionary], - ) -> None: - super().__init__() - 
logger.info(f"HubertModel Config: {cfg}") - - feature_enc_layers = eval(cfg.conv_feature_layers) # noqa - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - feature_ds_rate = np.prod([s for _, _, s in feature_enc_layers]) - self.feat2tar_ratio = ( - cfg.label_rate * feature_ds_rate / task_cfg.sample_rate - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - self.logit_temp = cfg.logit_temp - self.skip_masked = cfg.skip_masked - self.skip_nomask = cfg.skip_nomask - - final_dim = ( - cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - ) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.untie_final_proj = cfg.untie_final_proj - if self.untie_final_proj: - self.final_proj = nn.Linear( - cfg.encoder_embed_dim, final_dim * len(dictionaries) - ) - else: - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - # modules below are not needed during fine-tuning - if any([d is None for d in dictionaries]): - logger.info( - "cannot find dictionary. 
assume will be used for fine-tuning" - ) - else: - self.num_classes = [len(d) for d in dictionaries] - self.label_embs_concat = nn.Parameter( - torch.FloatTensor(sum(self.num_classes), final_dim) - ) - nn.init.uniform_(self.label_embs_concat) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: HubertConfig, task: HubertPretrainingTask): - """Build a new model instance.""" - - model = HubertModel(cfg, task.cfg, task.dictionaries) - return model - - def apply_mask(self, x, padding_mask, target_list): - B, T, C = x.shape - if self.mask_prob > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x[mask_indices] = self.mask_emb - else: - mask_indices = None - - if self.mask_channel_prob > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - return x, mask_indices - - def compute_nce(self, x, pos, negs): - neg_is_pos = (pos == negs).all(-1) - pos = pos.unsqueeze(0) - targets = torch.cat([pos, negs], dim=0) - - logits = torch.cosine_similarity( - x.float(), targets.float(), dim=-1 - ).type_as(x) - logits /= self.logit_temp - if neg_is_pos.any(): - logits[1:][neg_is_pos] = float("-inf") - logits = logits.transpose(0, 1) # (num_x, num_cls+1) - return logits - - def forward_features(self, source: torch.Tensor) -> torch.Tensor: - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - return features - - def forward_targets( - self, features: torch.Tensor, target_list: List[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Trim features to ensure labels exist and then get aligned labels - feat_tsz = features.size(2) - targ_tsz = min([t.size(1) for t in target_list]) - if self.feat2tar_ratio * feat_tsz > targ_tsz: - feat_tsz = int(targ_tsz / self.feat2tar_ratio) - features = features[..., :feat_tsz] - target_inds = torch.arange(feat_tsz).float() * self.feat2tar_ratio - target_list = [t[:, target_inds.long()] for t in target_list] - return features, target_list - - def forward_padding_mask( - self, features: torch.Tensor, padding_mask: torch.Tensor, - ) -> torch.Tensor: - extra = padding_mask.size(1) % features.size(1) - if extra > 0: - padding_mask = padding_mask[:, :-extra] - padding_mask = padding_mask.view( - padding_mask.size(0), features.size(1), -1 - ) - padding_mask = padding_mask.all(-1) - return padding_mask - - def forward( - self, - source: torch.Tensor, - target_list: Optional[List[torch.Tensor]] = None, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = True, - features_only: bool = False, - output_layer: Optional[int] = None, - ) -> Dict[str, torch.Tensor]: - """output 
layer is 1-based""" - features = self.forward_features(source) - if target_list is not None: - features, target_list = self.forward_targets(features, target_list) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None: - padding_mask = self.forward_padding_mask(features, padding_mask) - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - if mask: - x, mask_indices = self.apply_mask( - features, padding_mask, target_list - ) - else: - x = features - mask_indices = None - - # feature: (B, T, D), float - # target: (B, T), long - # x: (B, T, D), float - # padding_mask: (B, T), bool - # mask_indices: (B, T), bool - x, _ = self.encoder( - x, - padding_mask=padding_mask, - layer=None if output_layer is None else output_layer - 1 - ) - - if features_only: - return {"x": x, "padding_mask": padding_mask, "features": features} - - def compute_pred(proj_x, target, label_embs): - # compute logits for the i-th label set - y = torch.index_select(label_embs, 0, target.long()) - negs = label_embs.unsqueeze(1).expand(-1, proj_x.size(0), -1) - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - # proj_x: (S, D) - # y: (S, D) - # negs: (Neg, S, D) - return self.compute_nce(proj_x, y, negs) - - label_embs_list = self.label_embs_concat.split(self.num_classes, 0) - - if not self.skip_masked: - masked_indices = torch.logical_and(~padding_mask, mask_indices) - proj_x_m = self.final_proj(x[masked_indices]) - if self.untie_final_proj: - proj_x_m_list = proj_x_m.chunk(len(target_list), dim=-1) - else: - proj_x_m_list = [proj_x_m for _ in range(len(target_list))] - logit_m_list = [ - compute_pred(proj_x_m, t[masked_indices], label_embs_list[i]) - for i, (proj_x_m, t) in enumerate( - zip(proj_x_m_list, target_list) - ) - ] - else: - logit_m_list = [None for _ in target_list] - - if not self.skip_nomask: - nomask_indices = torch.logical_and(~padding_mask, ~mask_indices) - proj_x_u = self.final_proj(x[nomask_indices]) - if self.untie_final_proj: - proj_x_u_list = proj_x_u.chunk(len(target_list), dim=-1) - else: - proj_x_u_list = [proj_x_u for _ in range(len(target_list))] - - logit_u_list = [ - compute_pred(proj_x_u, t[nomask_indices], label_embs_list[i]) - for i, (proj_x_u, t) in enumerate( - zip(proj_x_u_list, target_list) - ) - ] - else: - logit_u_list = [None for _ in target_list] - - result = { - "logit_m_list": logit_m_list, - "logit_u_list": logit_u_list, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - return result - - def extract_features( - self, - source: torch.Tensor, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = False, - ret_conv: bool = False, - output_layer: Optional[int] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - res = self.forward( - source, - padding_mask=padding_mask, - mask=mask, - features_only=True, - output_layer=output_layer, - ) - feature = res["features"] if ret_conv else res["x"] - return feature, res["padding_mask"] - - def get_logits(self, net_output, is_masked=True): - if is_masked: - logits_list = net_output["logit_m_list"] - else: - logits_list = net_output["logit_u_list"] - logits_list = [x.float() for x in logits_list if x is not None] - return logits_list - - def get_targets(self, net_output, is_masked=True): - logits_list = 
self.get_logits(net_output, is_masked) - targets_list = [ - x.new_zeros(x.size(0), dtype=torch.long) for x in logits_list - ] - return targets_list - - def get_extra_losses(self, net_output): - extra_losses = [] - names = [] - - if "features_pen" in net_output: - extra_losses.append(net_output["features_pen"]) - names.append("features_pen") - - return extra_losses, names - - def remove_pretraining_modules(self): - self.target_glu = None - self.final_proj = None diff --git a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/setup.py b/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/setup.py deleted file mode 100644 index 6a21f7e2ee0840a3b251522275a0b32a856951d7..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="dynamicconv_layer", - ext_modules=[ - CUDAExtension( - name="dynamicconv_cuda", - sources=[ - "dynamicconv_cuda.cpp", - "dynamicconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/spaces/gradio/musical_instrument_identification/run.py b/spaces/gradio/musical_instrument_identification/run.py deleted file mode 100644 index 94d60c7d61ce02fa4bfb4c3bb9cd54a1bdb99e8c..0000000000000000000000000000000000000000 --- a/spaces/gradio/musical_instrument_identification/run.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import torch -import torchaudio -from timeit import default_timer as timer -from data_setups import audio_preprocess, resample -import gdown - -url = 'https://drive.google.com/uc?id=1X5CR18u0I-ZOi_8P0cNptCe5JGk9Ro0C' -output = 'piano.wav' -gdown.download(url, output, quiet=False) -url = 'https://drive.google.com/uc?id=1W-8HwmGR5SiyDbUcGAZYYDKdCIst07__' -output= 'torch_efficientnet_fold2_CNN.pth' -gdown.download(url, output, quiet=False) -device = "cuda" if torch.cuda.is_available() else "cpu" -SAMPLE_RATE = 44100 -AUDIO_LEN = 2.90 -model = torch.load("torch_efficientnet_fold2_CNN.pth", map_location=torch.device('cpu')) -LABELS = [ - "Cello", "Clarinet", "Flute", "Acoustic Guitar", "Electric Guitar", "Organ", "Piano", "Saxophone", "Trumpet", "Violin", "Voice" -] -example_list = [ - ["piano.wav"] -] - - -def predict(audio_path): - start_time = timer() - wavform, sample_rate = torchaudio.load(audio_path) - wav = resample(wavform, sample_rate, SAMPLE_RATE) - if len(wav) > int(AUDIO_LEN * SAMPLE_RATE): - wav = wav[:int(AUDIO_LEN * SAMPLE_RATE)] - else: - print(f"input length {len(wav)} too small!, need over {int(AUDIO_LEN * SAMPLE_RATE)}") - return - img = audio_preprocess(wav, SAMPLE_RATE).unsqueeze(0) - model.eval() - with torch.inference_mode(): - pred_probs = torch.softmax(model(img), dim=1) - pred_labels_and_probs = {LABELS[i]: float(pred_probs[0][i]) for i in range(len(LABELS))} - pred_time = round(timer() - start_time, 5) - return pred_labels_and_probs, pred_time - -demo = gr.Interface(fn=predict, - inputs=gr.Audio(type="filepath"), - outputs=[gr.Label(num_top_classes=11, label="Predictions"), - gr.Number(label="Prediction time (s)")], - examples=example_list, - cache_examples=False - ) - -demo.launch(debug=False) diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/crop_images.py 
b/spaces/gwang-kim/DATID-3D/pose_estimation/crop_images.py deleted file mode 100644 index 92c9fab2823c3fdda8cc637fd891ff1f03d57d4a..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/crop_images.py +++ /dev/null @@ -1,132 +0,0 @@ -import argparse -import os -import json - -import numpy as np -from PIL import Image -from tqdm import tqdm - -# calculating least square problem for image alignment -def POS(xp, x): - npts = xp.shape[1] - - A = np.zeros([2*npts, 8]) - - A[0:2*npts-1:2, 0:3] = x.transpose() - A[0:2*npts-1:2, 3] = 1 - - A[1:2*npts:2, 4:7] = x.transpose() - A[1:2*npts:2, 7] = 1 - - b = np.reshape(xp.transpose(), [2*npts, 1]) - - k, _, _, _ = np.linalg.lstsq(A, b) - - R1 = k[0:3] - R2 = k[4:7] - sTx = k[3] - sTy = k[7] - s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2 - t = np.stack([sTx, sTy], axis=0) - - return t, s - -def extract_5p(lm): - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean( - lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0) - lm5p = lm5p[[1, 2, 0, 3, 4], :] - return lm5p - -# resize and crop images for face reconstruction -def resize_n_crop_img(img, lm, t, s, target_size=1024., mask=None): - w0, h0 = img.size - w = (w0*s).astype(np.int32) - h = (h0*s).astype(np.int32) - left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32) - right = left + target_size - up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32) - below = up + target_size - img = img.resize((w, h), resample=Image.LANCZOS) - img = img.crop((left, up, right, below)) - - if mask is not None: - mask = mask.resize((w, h), resample=Image.LANCZOS) - mask = mask.crop((left, up, right, below)) - - lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] - - t[1] + h0/2], axis=1)*s - lm = lm - np.reshape( - np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2]) - return img, lm, mask - - -# utils for face reconstruction -def align_img(img, lm, lm3D, mask=None, target_size=1024., rescale_factor=466.285): - """ - Return: - transparams --numpy.array (raw_W, raw_H, scale, tx, ty) - img_new --PIL.Image (target_size, target_size, 3) - lm_new --numpy.array (68, 2), y direction is opposite to v direction - mask_new --PIL.Image (target_size, target_size) - - Parameters: - img --PIL.Image (raw_H, raw_W, 3) - lm --numpy.array (68, 2), y direction is opposite to v direction - lm3D --numpy.array (5, 3) - mask --PIL.Image (raw_H, raw_W, 3) - """ - - w0, h0 = img.size - if lm.shape[0] != 5: - lm5p = extract_5p(lm) - else: - lm5p = lm - - # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face - t, s = POS(lm5p.transpose(), lm3D.transpose()) - s = rescale_factor/s - - # processing the image - img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask) - #img_new = img.resize((1024,1024),resample=Image.LANCZOS) - #lm_new = lm*1024.0/512.0 - #mask_new=None - # img.save("/home/koki/Projects/Deep3DFaceRecon_pytorch/checkpoints/pretrained/results/iphone/epoch_20_000000/img_new.jpg") - trans_params = np.array([w0, h0, s, t[0][0], t[1][0]]) - lm_new *= 224/1024.0 - img_new_low = img_new.resize((224, 224), resample=Image.LANCZOS) - - return trans_params, img_new_low, lm_new, mask_new, img_new - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--indir', type=str, required=True) - parser.add_argument('--outdir', type=str, required=True) - 
parser.add_argument('--compress_level', type=int, default=0) - args = parser.parse_args() - - with open(os.path.join(args.indir, 'cropping_params.json')) as f: - cropping_params = json.load(f) - - os.makedirs(args.outdir, exist_ok=True) - - for im_path, cropping_dict in tqdm(cropping_params.items()): - im = Image.open(os.path.join(args.indir, im_path)).convert('RGB') - - _, H = im.size - lm = np.array(cropping_dict['lm']) - lm = lm.reshape([-1, 2]) - lm[:, -1] = H - 1 - lm[:, -1] - - _, im_pil, lm, _, im_high = align_img(im, lm, np.array(cropping_dict['lm3d_std']), rescale_factor=cropping_dict['rescale_factor']) - - left = int(im_high.size[0]/2 - cropping_dict['center_crop_size']/2) - upper = int(im_high.size[1]/2 - cropping_dict['center_crop_size']/2) - right = left + cropping_dict['center_crop_size'] - lower = upper + cropping_dict['center_crop_size'] - im_cropped = im_high.crop((left, upper, right,lower)) - im_cropped = im_cropped.resize((cropping_dict['output_size'], cropping_dict['output_size']), resample=Image.LANCZOS) - - im_cropped.save(os.path.join(args.outdir, os.path.basename(im_path)), compress_level=args.compress_level) \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py deleted file mode 100644 index f663a9a117661a56438d8a033903f18941319a83..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py +++ /dev/null @@ -1,658 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Helper for managing networks.""" - -import types -import inspect -import re -import uuid -import sys -import numpy as np -import tensorflow as tf - -from collections import OrderedDict -from typing import Any, List, Tuple, Union - -from . import tfutil -from .. import util - -from .tfutil import TfExpression, TfExpressionEx - -# Custom import handlers for dealing with legacy data in pickle import. -_import_handlers = [] -# Source code for temporary modules created during pickle import. -_import_module_src = dict() - - -def import_handler(handler_func): - """Function decorator for declaring custom import handlers.""" - _import_handlers.append(handler_func) - return handler_func - - -class Network: - """Generic network abstraction. - - Acts as a convenience wrapper for a parameterized network construction - function, providing several utility methods and convenient access to - the inputs/outputs/weights. - - Network objects can be safely pickled and unpickled for long-term - archival purposes. The pickling works reliably as long as the underlying - network construction function is defined in a standalone Python module - that has no side effects or application-specific imports. - - Args: - name: Network name. Used to select TensorFlow name and variable scopes. - func_name: Fully qualified name of the underlying network construction function, or a top-level function object. - static_kwargs: Keyword arguments to be passed in to the network construction function. - - Attributes: - name: User-specified name, defaults to build func name if None. - scope: Unique TensorFlow scope containing template graph and variables, derived from the user-specified name. 
- static_kwargs: Arguments passed to the user-supplied build func. - components: Container for sub-networks. Passed to the build func, and retained between calls. - num_inputs: Number of input tensors. - num_outputs: Number of output tensors. - input_shapes: Input tensor shapes (NC or NCHW), including minibatch dimension. - output_shapes: Output tensor shapes (NC or NCHW), including minibatch dimension. - input_shape: Short-hand for input_shapes[0]. - output_shape: Short-hand for output_shapes[0]. - input_templates: Input placeholders in the template graph. - output_templates: Output tensors in the template graph. - input_names: Name string for each input. - output_names: Name string for each output. - own_vars: Variables defined by this network (local_name => var), excluding sub-networks. - vars: All variables (local_name => var). - trainables: All trainable variables (local_name => var). - var_global_to_local: Mapping from variable global names to local names. - """ - - def __init__(self, name: str = None, func_name: Any = None, **static_kwargs): - tfutil.assert_tf_initialized() - assert isinstance(name, str) or name is None - assert func_name is not None - assert isinstance( - func_name, str) or util.is_top_level_function(func_name) - assert util.is_pickleable(static_kwargs) - - self._init_fields() - self.name = name - self.static_kwargs = util.EasyDict(static_kwargs) - - # Locate the user-specified network build function. - if util.is_top_level_function(func_name): - func_name = util.get_top_level_function_name(func_name) - module, self._build_func_name = util.get_module_from_obj_name( - func_name) - self._build_func = util.get_obj_from_module( - module, self._build_func_name) - assert callable(self._build_func) - - # Dig up source code for the module containing the build function. - self._build_module_src = _import_module_src.get(module, None) - if self._build_module_src is None: - self._build_module_src = inspect.getsource(module) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - - def _init_fields(self) -> None: - self.name = None - self.scope = None - self.static_kwargs = util.EasyDict() - self.components = util.EasyDict() - self.num_inputs = 0 - self.num_outputs = 0 - self.input_shapes = [[]] - self.output_shapes = [[]] - self.input_shape = [] - self.output_shape = [] - self.input_templates = [] - self.output_templates = [] - self.input_names = [] - self.output_names = [] - self.own_vars = OrderedDict() - self.vars = OrderedDict() - self.trainables = OrderedDict() - self.var_global_to_local = OrderedDict() - - # User-supplied build function that constructs the network. - self._build_func = None - self._build_func_name = None # Name of the build function. - # Full source code of the module containing the build function. - self._build_module_src = None - self._run_cache = dict() # Cached graph data for Network.run(). - - def _init_graph(self) -> None: - # Collect inputs. - self.input_names = [] - - for param in inspect.signature(self._build_func).parameters.values(): - if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty: - self.input_names.append(param.name) - - self.num_inputs = len(self.input_names) - assert self.num_inputs >= 1 - - # Choose name and scope. - if self.name is None: - self.name = self._build_func_name - assert re.match("^[A-Za-z0-9_.\\-]*$", self.name) - with tf.name_scope(None): - self.scope = tf.get_default_graph().unique_name(self.name, mark_as_used=True) - - # Finalize build func kwargs. 
- build_kwargs = dict(self.static_kwargs) - build_kwargs["is_template_graph"] = True - build_kwargs["components"] = self.components - - # Build template graph. - # ignore surrounding scopes - with tfutil.absolute_variable_scope(self.scope, reuse=False), tfutil.absolute_name_scope(self.scope): - assert tf.get_variable_scope().name == self.scope - assert tf.get_default_graph().get_name_scope() == self.scope - # ignore surrounding control dependencies - with tf.control_dependencies(None): - self.input_templates = [tf.placeholder( - tf.float32, name=name) for name in self.input_names] - out_expr = self._build_func( - *self.input_templates, **build_kwargs) - - # Collect outputs. - assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - self.output_templates = [out_expr] if tfutil.is_tf_expression( - out_expr) else list(out_expr) - self.num_outputs = len(self.output_templates) - assert self.num_outputs >= 1 - assert all(tfutil.is_tf_expression(t) for t in self.output_templates) - - # Perform sanity checks. - if any(t.shape.ndims is None for t in self.input_templates): - raise ValueError( - "Network input shapes not defined. Please call x.set_shape() for each input.") - if any(t.shape.ndims is None for t in self.output_templates): - raise ValueError( - "Network output shapes not defined. Please call x.set_shape() where applicable.") - if any(not isinstance(comp, Network) for comp in self.components.values()): - raise ValueError( - "Components of a Network must be Networks themselves.") - if len(self.components) != len(set(comp.name for comp in self.components.values())): - raise ValueError("Components of a Network must have unique names.") - - # List inputs and outputs. - self.input_shapes = [t.shape.as_list() for t in self.input_templates] - self.output_shapes = [t.shape.as_list() for t in self.output_templates] - self.input_shape = self.input_shapes[0] - self.output_shape = self.output_shapes[0] - self.output_names = [t.name.split( - "/")[-1].split(":")[0] for t in self.output_templates] - - # List variables. - self.own_vars = OrderedDict((var.name[len( - self.scope) + 1:].split(":")[0], var) for var in tf.global_variables(self.scope + "/")) - self.vars = OrderedDict(self.own_vars) - self.vars.update((comp.name + "/" + name, var) - for comp in self.components.values() for name, var in comp.vars.items()) - self.trainables = OrderedDict( - (name, var) for name, var in self.vars.items() if var.trainable) - self.var_global_to_local = OrderedDict( - (var.name.split(":")[0], name) for name, var in self.vars.items()) - - def reset_own_vars(self) -> None: - """Re-initialize all variables of this network, excluding sub-networks.""" - tfutil.run([var.initializer for var in self.own_vars.values()]) - - def reset_vars(self) -> None: - """Re-initialize all variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.vars.values()]) - - def reset_trainables(self) -> None: - """Re-initialize all trainable variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.trainables.values()]) - - def get_output_for(self, *in_expr: TfExpression, return_as_list: bool = False, **dynamic_kwargs) -> Union[TfExpression, List[TfExpression]]: - """Construct TensorFlow expression(s) for the output(s) of this network, given the input expression(s).""" - assert len(in_expr) == self.num_inputs - assert not all(expr is None for expr in in_expr) - - # Finalize build func kwargs. 
- build_kwargs = dict(self.static_kwargs) - build_kwargs.update(dynamic_kwargs) - build_kwargs["is_template_graph"] = False - build_kwargs["components"] = self.components - - # Build TensorFlow graph to evaluate the network. - with tfutil.absolute_variable_scope(self.scope, reuse=True), tf.name_scope(self.name): - assert tf.get_variable_scope().name == self.scope - valid_inputs = [expr for expr in in_expr if expr is not None] - final_inputs = [] - for expr, name, shape in zip(in_expr, self.input_names, self.input_shapes): - if expr is not None: - expr = tf.identity(expr, name=name) - else: - expr = tf.zeros([tf.shape(valid_inputs[0])[ - 0]] + shape[1:], name=name) - final_inputs.append(expr) - out_expr = self._build_func(*final_inputs, **build_kwargs) - - # Propagate input shapes back to the user-specified expressions. - for expr, final in zip(in_expr, final_inputs): - if isinstance(expr, tf.Tensor): - expr.set_shape(final.shape) - - # Express outputs in the desired format. - assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - if return_as_list: - out_expr = [out_expr] if tfutil.is_tf_expression( - out_expr) else list(out_expr) - return out_expr - - def get_var_local_name(self, var_or_global_name: Union[TfExpression, str]) -> str: - """Get the local name of a given variable, without any surrounding name scopes.""" - assert tfutil.is_tf_expression( - var_or_global_name) or isinstance(var_or_global_name, str) - global_name = var_or_global_name if isinstance( - var_or_global_name, str) else var_or_global_name.name - return self.var_global_to_local[global_name] - - def find_var(self, var_or_local_name: Union[TfExpression, str]) -> TfExpression: - """Find variable by local or global name.""" - assert tfutil.is_tf_expression( - var_or_local_name) or isinstance(var_or_local_name, str) - return self.vars[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name - - def get_var(self, var_or_local_name: Union[TfExpression, str]) -> np.ndarray: - """Get the value of a given variable as NumPy array. - Note: This method is very inefficient -- prefer to use tflib.run(list_of_vars) whenever possible.""" - return self.find_var(var_or_local_name).eval() - - def set_var(self, var_or_local_name: Union[TfExpression, str], new_value: Union[int, float, np.ndarray]) -> None: - """Set the value of a given variable based on the given NumPy array. - Note: This method is very inefficient -- prefer to use tflib.set_vars() whenever possible.""" - tfutil.set_vars({self.find_var(var_or_local_name): new_value}) - - def __getstate__(self) -> dict: - """Pickle export.""" - state = dict() - state["version"] = 4 - state["name"] = self.name - state["static_kwargs"] = dict(self.static_kwargs) - state["components"] = dict(self.components) - state["build_module_src"] = self._build_module_src - state["build_func_name"] = self._build_func_name - state["variables"] = list( - zip(self.own_vars.keys(), tfutil.run(list(self.own_vars.values())))) - return state - - def __setstate__(self, state: dict) -> None: - """Pickle import.""" - # pylint: disable=attribute-defined-outside-init - tfutil.assert_tf_initialized() - self._init_fields() - - # Execute custom import handlers. - for handler in _import_handlers: - state = handler(state) - - # Set basic fields. 
- assert state["version"] in [2, 3, 4] - self.name = state["name"] - self.static_kwargs = util.EasyDict(state["static_kwargs"]) - self.components = util.EasyDict(state.get("components", {})) - self._build_module_src = state["build_module_src"] - self._build_func_name = state["build_func_name"] - - # Create temporary module from the imported source code. - module_name = "_tflib_network_import_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _import_module_src[module] = self._build_module_src - exec(self._build_module_src, module.__dict__) # pylint: disable=exec-used - - # Locate network build function in the temporary module. - self._build_func = util.get_obj_from_module( - module, self._build_func_name) - assert callable(self._build_func) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - tfutil.set_vars({self.find_var(name): value for name, - value in state["variables"]}) - - def clone(self, name: str = None, **new_static_kwargs) -> "Network": - """Create a clone of this network with its own copy of the variables.""" - # pylint: disable=protected-access - net = object.__new__(Network) - net._init_fields() - net.name = name if name is not None else self.name - net.static_kwargs = util.EasyDict(self.static_kwargs) - net.static_kwargs.update(new_static_kwargs) - net._build_module_src = self._build_module_src - net._build_func_name = self._build_func_name - net._build_func = self._build_func - net._init_graph() - net.copy_vars_from(self) - return net - - def copy_own_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, excluding sub-networks.""" - names = [name for name in self.own_vars.keys() - if name in src_net.own_vars] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def copy_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, including sub-networks.""" - names = [name for name in self.vars.keys() if name in src_net.vars] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def copy_trainables_from(self, src_net: "Network") -> None: - """Copy the values of all trainable variables from the given network, including sub-networks.""" - names = [name for name in self.trainables.keys() - if name in src_net.trainables] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def convert(self, new_func_name: str, new_name: str = None, **new_static_kwargs) -> "Network": - """Create new network with the given parameters, and copy all variables from this network.""" - if new_name is None: - new_name = self.name - static_kwargs = dict(self.static_kwargs) - static_kwargs.update(new_static_kwargs) - net = Network(name=new_name, func_name=new_func_name, **static_kwargs) - net.copy_vars_from(self) - return net - - def setup_as_moving_average_of(self, src_net: "Network", beta: TfExpressionEx = 0.99, beta_nontrainable: TfExpressionEx = 0.0) -> tf.Operation: - """Construct a TensorFlow op that updates the variables of this network - to be slightly closer to those of the given network.""" - with tfutil.absolute_name_scope(self.scope + "/_MovingAvg"): - ops = [] - for name, var in self.vars.items(): - if name in src_net.vars: - cur_beta = beta if name in self.trainables else beta_nontrainable - new_value = tfutil.lerp(src_net.vars[name], var, cur_beta) - ops.append(var.assign(new_value)) - return 
tf.group(*ops) - - def run(self, - *in_arrays: Tuple[Union[np.ndarray, None], ...], - input_transform: dict = None, - output_transform: dict = None, - return_as_list: bool = False, - print_progress: bool = False, - minibatch_size: int = None, - num_gpus: int = 1, - assume_frozen: bool = False, - **dynamic_kwargs) -> Union[np.ndarray, Tuple[np.ndarray, ...], List[np.ndarray]]: - """Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s). - - Args: - input_transform: A dict specifying a custom transformation to be applied to the input tensor(s) before evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the input - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - output_transform: A dict specifying a custom transformation to be applied to the output tensor(s) after evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the output - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - return_as_list: True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs. - print_progress: Print progress to the console? Useful for very large input arrays. - minibatch_size: Maximum minibatch size to use, None = disable batching. - num_gpus: Number of GPUs to use. - assume_frozen: Improve multi-GPU performance by assuming that the trainable parameters will remain changed between calls. - dynamic_kwargs: Additional keyword arguments to be passed into the network build function. - """ - assert len(in_arrays) == self.num_inputs - assert not all(arr is None for arr in in_arrays) - assert input_transform is None or util.is_top_level_function( - input_transform["func"]) - assert output_transform is None or util.is_top_level_function( - output_transform["func"]) - output_transform, dynamic_kwargs = _handle_legacy_output_transforms( - output_transform, dynamic_kwargs) - num_items = in_arrays[0].shape[0] - if minibatch_size is None: - minibatch_size = num_items - - # Construct unique hash key from all arguments that affect the TensorFlow graph. - key = dict(input_transform=input_transform, output_transform=output_transform, - num_gpus=num_gpus, assume_frozen=assume_frozen, dynamic_kwargs=dynamic_kwargs) - - def unwind_key(obj): - if isinstance(obj, dict): - return [(key, unwind_key(value)) for key, value in sorted(obj.items())] - if callable(obj): - return util.get_top_level_function_name(obj) - return obj - key = repr(unwind_key(key)) - - # Build graph. 
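The `run()` docstring above describes a NumPy-in / NumPy-out interface with optional transform dicts and minibatching. A hedged usage sketch follows: it assumes a generator `Network` has already been unpickled as `Gs`, that it takes a latent plus a label input, and that `tflib.convert_images_to_uint8` is importable (as the legacy warning further down recommends); the 512-dim latent size is likewise an assumption.

```python
# Hedged sketch of the run() API documented above; `Gs`, the 512-dim latent
# and the None label input are illustrative assumptions, not part of this module.
import numpy as np
import dnnlib.tflib as tflib

def sample_uint8_images(Gs, num=16, latent_size=512):
    latents = np.random.randn(num, latent_size).astype(np.float32)
    labels = None                                   # second input; run() fills None inputs with zeros
    return Gs.run(
        latents, labels,
        output_transform=dict(func=tflib.convert_images_to_uint8),
        minibatch_size=8,                           # evaluate at most 8 items per session run
        num_gpus=1,
    )
```

If the network takes a single input, pass just that one array; `run()` asserts that the number of positional arrays matches `num_inputs`.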
- if key not in self._run_cache: - with tfutil.absolute_name_scope(self.scope + "/_Run"), tf.control_dependencies(None): - with tf.device("/cpu:0"): - in_expr = [tf.placeholder(tf.float32, name=name) - for name in self.input_names] - in_split = list( - zip(*[tf.split(x, num_gpus) for x in in_expr])) - - out_split = [] - for gpu in range(num_gpus): - with tf.device("/gpu:%d" % gpu): - net_gpu = self.clone() if assume_frozen else self - in_gpu = in_split[gpu] - - if input_transform is not None: - in_kwargs = dict(input_transform) - in_gpu = in_kwargs.pop("func")( - *in_gpu, **in_kwargs) - in_gpu = [in_gpu] if tfutil.is_tf_expression( - in_gpu) else list(in_gpu) - - assert len(in_gpu) == self.num_inputs - out_gpu = net_gpu.get_output_for( - *in_gpu, return_as_list=True, **dynamic_kwargs) - - if output_transform is not None: - out_kwargs = dict(output_transform) - out_gpu = out_kwargs.pop("func")( - *out_gpu, **out_kwargs) - out_gpu = [out_gpu] if tfutil.is_tf_expression( - out_gpu) else list(out_gpu) - - assert len(out_gpu) == self.num_outputs - out_split.append(out_gpu) - - with tf.device("/cpu:0"): - out_expr = [tf.concat(outputs, axis=0) - for outputs in zip(*out_split)] - self._run_cache[key] = in_expr, out_expr - - # Run minibatches. - in_expr, out_expr = self._run_cache[key] - out_arrays = [np.empty( - [num_items] + expr.shape.as_list()[1:], expr.dtype.name) for expr in out_expr] - - for mb_begin in range(0, num_items, minibatch_size): - if print_progress: - print("\r%d / %d" % (mb_begin, num_items), end="") - - mb_end = min(mb_begin + minibatch_size, num_items) - mb_num = mb_end - mb_begin - mb_in = [src[mb_begin: mb_end] if src is not None else np.zeros( - [mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)] - mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in))) - - for dst, src in zip(out_arrays, mb_out): - dst[mb_begin: mb_end] = src - - # Done. - if print_progress: - print("\r%d / %d" % (num_items, num_items)) - - if not return_as_list: - out_arrays = out_arrays[0] if len( - out_arrays) == 1 else tuple(out_arrays) - return out_arrays - - def list_ops(self) -> List[TfExpression]: - include_prefix = self.scope + "/" - exclude_prefix = include_prefix + "_" - ops = tf.get_default_graph().get_operations() - ops = [op for op in ops if op.name.startswith(include_prefix)] - ops = [op for op in ops if not op.name.startswith(exclude_prefix)] - return ops - - def list_layers(self) -> List[Tuple[str, TfExpression, List[TfExpression]]]: - """Returns a list of (layer_name, output_expr, trainable_vars) tuples corresponding to - individual layers of the network. Mainly intended to be used for reporting.""" - layers = [] - - def recurse(scope, parent_ops, parent_vars, level): - # Ignore specific patterns. - if any(p in scope for p in ["/Shape", "/strided_slice", "/Cast", "/concat", "/Assign"]): - return - - # Filter ops and vars by scope. - global_prefix = scope + "/" - local_prefix = global_prefix[len(self.scope) + 1:] - cur_ops = [op for op in parent_ops if op.name.startswith( - global_prefix) or op.name == global_prefix[:-1]] - cur_vars = [(name, var) for name, var in parent_vars if name.startswith( - local_prefix) or name == local_prefix[:-1]] - if not cur_ops and not cur_vars: - return - - # Filter out all ops related to variables. 
- for var in [op for op in cur_ops if op.type.startswith("Variable")]: - var_prefix = var.name + "/" - cur_ops = [ - op for op in cur_ops if not op.name.startswith(var_prefix)] - - # Scope does not contain ops as immediate children => recurse deeper. - contains_direct_ops = any("/" not in op.name[len(global_prefix):] and op.type not in [ - "Identity", "Cast", "Transpose"] for op in cur_ops) - if (level == 0 or not contains_direct_ops) and (len(cur_ops) + len(cur_vars)) > 1: - visited = set() - for rel_name in [op.name[len(global_prefix):] for op in cur_ops] + [name[len(local_prefix):] for name, _var in cur_vars]: - token = rel_name.split("/")[0] - if token not in visited: - recurse(global_prefix + token, - cur_ops, cur_vars, level + 1) - visited.add(token) - return - - # Report layer. - layer_name = scope[len(self.scope) + 1:] - layer_output = cur_ops[-1].outputs[0] if cur_ops else cur_vars[-1][1] - layer_trainables = [var for _name, - var in cur_vars if var.trainable] - layers.append((layer_name, layer_output, layer_trainables)) - - recurse(self.scope, self.list_ops(), list(self.vars.items()), 0) - return layers - - def print_layers(self, title: str = None, hide_layers_with_no_params: bool = False) -> None: - """Print a summary table of the network structure.""" - rows = [[title if title is not None else self.name, - "Params", "OutputShape", "WeightShape"]] - rows += [["---"] * 4] - total_params = 0 - - for layer_name, layer_output, layer_trainables in self.list_layers(): - num_params = sum(int(np.prod(var.shape.as_list())) - for var in layer_trainables) - weights = [ - var for var in layer_trainables if var.name.endswith("/weight:0")] - weights.sort(key=lambda x: len(x.name)) - if len(weights) == 0 and len(layer_trainables) == 1: - weights = layer_trainables - total_params += num_params - - if not hide_layers_with_no_params or num_params != 0: - num_params_str = str(num_params) if num_params > 0 else "-" - output_shape_str = str(layer_output.shape) - weight_shape_str = str(weights[0].shape) if len( - weights) >= 1 else "-" - rows += [[layer_name, num_params_str, - output_shape_str, weight_shape_str]] - - rows += [["---"] * 4] - rows += [["Total", str(total_params), "", ""]] - - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(" ".join(cell + " " * (width - len(cell)) - for cell, width in zip(row, widths))) - print() - - def setup_weight_histograms(self, title: str = None) -> None: - """Construct summary ops to include histograms of all trainable parameters in TensorBoard.""" - if title is None: - title = self.name - - with tf.name_scope(None), tf.device(None), tf.control_dependencies(None): - for local_name, var in self.trainables.items(): - if "/" in local_name: - p = local_name.split("/") - name = title + "_" + p[-1] + "/" + "_".join(p[:-1]) - else: - name = title + "_toplevel/" + local_name - - tf.summary.histogram(name, var) - -# ---------------------------------------------------------------------------- -# Backwards-compatible emulation of legacy output transformation in Network.run(). 
- - -_print_legacy_warning = True - - -def _handle_legacy_output_transforms(output_transform, dynamic_kwargs): - global _print_legacy_warning - legacy_kwargs = ["out_mul", "out_add", "out_shrink", "out_dtype"] - if not any(kwarg in dynamic_kwargs for kwarg in legacy_kwargs): - return output_transform, dynamic_kwargs - - if _print_legacy_warning: - _print_legacy_warning = False - print() - print("WARNING: Old-style output transformations in Network.run() are deprecated.") - print("Consider using 'output_transform=dict(func=tflib.convert_images_to_uint8)'") - print("instead of 'out_mul=127.5, out_add=127.5, out_dtype=np.uint8'.") - print() - assert output_transform is None - - new_kwargs = dict(dynamic_kwargs) - new_transform = {kwarg: new_kwargs.pop( - kwarg) for kwarg in legacy_kwargs if kwarg in dynamic_kwargs} - new_transform["func"] = _legacy_output_transform_func - return new_transform, new_kwargs - - -def _legacy_output_transform_func(*expr, out_mul=1.0, out_add=0.0, out_shrink=1, out_dtype=None): - if out_mul != 1.0: - expr = [x * out_mul for x in expr] - - if out_add != 0.0: - expr = [x + out_add for x in expr] - - if out_shrink > 1: - ksize = [1, 1, out_shrink, out_shrink] - expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, - padding="VALID", data_format="NCHW") for x in expr] - - if out_dtype is not None: - if tf.as_dtype(out_dtype).is_integer: - expr = [tf.round(x) for x in expr] - expr = [tf.saturate_cast(x, out_dtype) for x in expr] - return expr diff --git a/spaces/hamacojr/CAT-Seg/open_clip/Makefile b/spaces/hamacojr/CAT-Seg/open_clip/Makefile deleted file mode 100644 index ff07eccefed3d959c77d007d2571e226a07ace60..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/open_clip/Makefile +++ /dev/null @@ -1,12 +0,0 @@ -install: ## [Local development] Upgrade pip, install requirements, install package. - python -m pip install -U pip - python -m pip install -e . 
- -install-training: - python -m pip install -r requirements-training.txt - -install-test: ## [Local development] Install test requirements - python -m pip install -r requirements-test.txt - -test: ## [Local development] Run unit tests - python -m pytest -x -s -v tests diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_inference.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_inference.py deleted file mode 100644 index ecd46d07271893e209a0050377129129fc75ef6b..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_inference.py +++ /dev/null @@ -1,82 +0,0 @@ - -import os -import pytest -import torch -import open_clip -import util_test - -os.environ['CUDA_VISIBLE_DEVICES'] = '' - -models_to_test = set(open_clip.list_models()) - -# testing excemptions -models_to_test = models_to_test.difference({ - # not available with timm yet - # see https://github.com/mlfoundations/open_clip/issues/219 - 'convnext_xlarge', - 'convnext_xxlarge', - 'convnext_xxlarge_320', - 'vit_medium_patch16_gap_256', - # exceeds GH runner memory limit - 'ViT-bigG-14', - 'ViT-e-14', - 'mt5-xl-ViT-H-14', -}) - -if 'OPEN_CLIP_TEST_REG_MODELS' in os.environ: - external_model_list = os.environ['OPEN_CLIP_TEST_REG_MODELS'] - with open(external_model_list, 'r') as f: - models_to_test = set(f.read().splitlines()).intersection(models_to_test) - print(f"Selected models from {external_model_list}: {models_to_test}") - -models_to_test = list(models_to_test) -models_to_test.sort() - -@pytest.mark.regression_test -@pytest.mark.parametrize('model_name', models_to_test) -def test_inference_with_data( - model_name, - pretrained = None, - pretrained_hf = False, - precision = 'fp32', - jit = False, - force_quick_gelu = False, -): - util_test.seed_all() - model, _, preprocess_val = open_clip.create_model_and_transforms( - model_name, - pretrained = pretrained, - precision = precision, - jit = jit, - force_quick_gelu = force_quick_gelu, - pretrained_hf = pretrained_hf - ) - model_id = f'{model_name}_{pretrained or pretrained_hf}_{precision}' - input_dir, output_dir = util_test.get_data_dirs() - # text - input_text_path = os.path.join(input_dir, 'random_text.pt') - gt_text_path = os.path.join(output_dir, f'{model_id}_random_text.pt') - if not os.path.isfile(input_text_path): - pytest.skip(reason = f"missing test data, expected at {input_text_path}") - if not os.path.isfile(gt_text_path): - pytest.skip(reason = f"missing test data, expected at {gt_text_path}") - input_text = torch.load(input_text_path) - gt_text = torch.load(gt_text_path) - y_text = util_test.inference_text(model, model_name, input_text) - assert (y_text == gt_text).all(), f"text output differs @ {input_text_path}" - # image - image_size = model.visual.image_size - if not isinstance(image_size, tuple): - image_size = (image_size, image_size) - input_image_path = os.path.join(input_dir, f'random_image_{image_size[0]}_{image_size[1]}.pt') - gt_image_path = os.path.join(output_dir, f'{model_id}_random_image.pt') - if not os.path.isfile(input_image_path): - pytest.skip(reason = f"missing test data, expected at {input_image_path}") - if not os.path.isfile(gt_image_path): - pytest.skip(reason = f"missing test data, expected at {gt_image_path}") - input_image = torch.load(input_image_path) - gt_image = torch.load(gt_image_path) - y_image = util_test.inference_image(model, preprocess_val, input_image) - assert (y_image == gt_image).all(), f"image output differs @ {input_image_path}" - - diff --git 
"a/spaces/hands012/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/hands012/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index eada69dc65587782125c0809381260a6bbdce225..0000000000000000000000000000000000000000 --- "a/spaces/hands012/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1。注意, 如果是.doc文件, 请先转化为.docx格式。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - 
from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.cpp b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.cpp deleted file mode 100644 index 52fc83f8140b29de7b2ad3cb490b8cb672959e16..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.cpp +++ /dev/null @@ -1,508 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -#include -#include "ROIAlign.h" - -namespace { - -// implementation taken from Caffe2 -template -struct PreCalc { - int pos1; - int pos2; - int pos3; - int pos4; - T w1; - T w2; - T w3; - T w4; -}; - -template -void pre_calc_for_bilinear_interpolate( - const int height, - const int width, - const int pooled_height, - const int pooled_width, - const int iy_upper, - const int ix_upper, - T roi_start_h, - T roi_start_w, - T bin_size_h, - T bin_size_w, - int roi_bin_grid_h, - int roi_bin_grid_w, - std::vector>& pre_calc) { - int pre_calc_index = 0; - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - for (int iy = 0; iy < iy_upper; iy++) { - const T yy = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < ix_upper; ix++) { - const T xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T x = xx; - T y = yy; - // deal with: inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - PreCalc pc; - pc.pos1 = 0; - pc.pos2 = 0; - pc.pos3 = 0; - pc.pos4 = 0; - pc.w1 = 0; - pc.w2 = 0; - pc.w3 = 0; - pc.w4 = 0; - pre_calc[pre_calc_index] = pc; - pre_calc_index += 1; - continue; - } - - if (y <= 0) { - y = 0; - } - if (x <= 0) { - x = 0; - } - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - // save weights and indices - PreCalc pc; - pc.pos1 = y_low * width + x_low; - pc.pos2 = y_low * width + x_high; - pc.pos3 = y_high * width + x_low; - pc.pos4 = y_high * width + x_high; - pc.w1 = w1; - pc.w2 = w2; - pc.w3 = w3; - pc.w4 = w4; - pre_calc[pre_calc_index] = pc; - - pre_calc_index += 1; - } - } - } - } -} - -template -void ROIAlignForward( - const int nthreads, - const T* input, - const T& spatial_scale, - const int channels, - const int height, - const int width, - const int pooled_height, - const int pooled_width, - const int sampling_ratio, - const T* rois, - T* output, - bool aligned) { - int n_rois = nthreads / channels / pooled_width / pooled_height; - // (n, c, ph, pw) is an element in the pooled output - // can be parallelized using omp - // #pragma omp parallel for num_threads(32) - for (int n = 0; n < n_rois; n++) { - int index_n = n * channels * pooled_width * pooled_height; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? 
(T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (aligned) { - AT_ASSERTM( - roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlign cannot have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceil(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); - - // We do average (integral) pooling inside a bin - // When the grid is empty, output zeros == 0/1, instead of NaN. - const T count = std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. = 4 - - // we want to precalculate indices and weights shared by all channels, - // this is the key point of optimization - std::vector> pre_calc( - roi_bin_grid_h * roi_bin_grid_w * pooled_width * pooled_height); - pre_calc_for_bilinear_interpolate( - height, - width, - pooled_height, - pooled_width, - roi_bin_grid_h, - roi_bin_grid_w, - roi_start_h, - roi_start_w, - bin_size_h, - bin_size_w, - roi_bin_grid_h, - roi_bin_grid_w, - pre_calc); - - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * pooled_width * pooled_height; - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - int pre_calc_index = 0; - - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - int index = index_n_c + ph * pooled_width + pw; - - T output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - PreCalc pc = pre_calc[pre_calc_index]; - output_val += pc.w1 * offset_input[pc.pos1] + - pc.w2 * offset_input[pc.pos2] + - pc.w3 * offset_input[pc.pos3] + pc.w4 * offset_input[pc.pos4]; - - pre_calc_index += 1; - } - } - output_val /= count; - - output[index] = output_val; - } // for pw - } // for ph - } // for c - } // for n -} - -template -void bilinear_interpolate_gradient( - const int height, - const int width, - T y, - T x, - T& w1, - T& w2, - T& w3, - T& w4, - int& x_low, - int& x_high, - int& y_low, - int& y_high, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) - y = 0; - if (x <= 0) - x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} - -template -inline void add(T* address, const T& val) { - *address += val; -} - -template -void ROIAlignBackward( - const int nthreads, - // may not be contiguous, and should be indexed using n_stride, etc - const T* grad_output, - const T& spatial_scale, - const int channels, - const int height, - const int width, - const int pooled_height, - const int pooled_width, - const int sampling_ratio, - T* grad_input, - const T* rois, - const int n_stride, - const int c_stride, - const int h_stride, - const int w_stride, - bool aligned) { - for (int index = 0; index < nthreads; index++) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? (T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (aligned) { - AT_ASSERTM( - roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlign do not have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - T* offset_grad_input = - grad_input + ((roi_batch_ind * channels + c) * height * width); - - int output_offset = n * n_stride + c * c_stride; - const T* offset_grad_output = grad_output + output_offset; - const T grad_output_this_bin = - offset_grad_output[ph * h_stride + pw * w_stride]; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceil(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); - - // We do average (integral) pooling inside a bin - const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient( - height, - width, - y, - x, - w1, - w2, - w3, - w4, - x_low, - x_high, - y_low, - y_high, - index); - - T g1 = grad_output_this_bin * w1 / count; - T g2 = grad_output_this_bin * w2 / count; - T g3 = grad_output_this_bin * w3 / count; - T g4 = grad_output_this_bin * w4 / count; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - // atomic add is not needed for now since it is single threaded - add(offset_grad_input + y_low * width + x_low, static_cast(g1)); - add(offset_grad_input + y_low * width + x_high, static_cast(g2)); - add(offset_grad_input + y_high * width + x_low, static_cast(g3)); - add(offset_grad_input + y_high * width + x_high, static_cast(g4)); - } // if - } // ix - } // iy - } // for -} // ROIAlignBackward - -} // namespace - -namespace detectron2 { - -at::Tensor ROIAlign_forward_cpu( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio, - bool aligned) { - AT_ASSERTM(input.device().is_cpu(), "input must be a CPU tensor"); - AT_ASSERTM(rois.device().is_cpu(), "rois must be a CPU tensor"); - - at::TensorArg input_t{input, "input", 1}, rois_t{rois, "rois", 2}; - - at::CheckedFrom c = "ROIAlign_forward_cpu"; - at::checkAllSameType(c, {input_t, rois_t}); - - auto num_rois = rois.size(0); - auto channels = input.size(1); - auto height = input.size(2); - auto width = input.size(3); - - at::Tensor output = at::zeros( - {num_rois, channels, pooled_height, pooled_width}, input.options()); - - auto output_size = num_rois * pooled_height * pooled_width * channels; - - if (output.numel() == 0) - return output; - - auto input_ = input.contiguous(), rois_ = rois.contiguous(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "ROIAlign_forward", [&] { - ROIAlignForward( - output_size, - input_.data_ptr(), - spatial_scale, - channels, - height, - width, - pooled_height, - pooled_width, - sampling_ratio, - rois_.data_ptr(), - output.data_ptr(), - aligned); - }); - return output; -} - -at::Tensor ROIAlign_backward_cpu( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio, - bool aligned) { - AT_ASSERTM(grad.device().is_cpu(), "grad must be a CPU tensor"); - AT_ASSERTM(rois.device().is_cpu(), "rois must be a CPU tensor"); - - at::TensorArg grad_t{grad, "grad", 1}, rois_t{rois, "rois", 2}; - - at::CheckedFrom c = "ROIAlign_backward_cpu"; - at::checkAllSameType(c, {grad_t, rois_t}); - - at::Tensor grad_input = - at::zeros({batch_size, channels, height, width}, grad.options()); - - // handle possibly empty gradients - if (grad.numel() == 0) { - return grad_input; - } - - // get stride values to ensure indexing into gradients is correct. 
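For readers skimming the C++ above: both the forward and backward kernels reduce each sampling point to four bilinear weights over its neighbouring pixels. The same weight logic, re-expressed as a small standalone NumPy function for illustration only (not part of the extension and not a drop-in replacement):

```python
# Illustrative NumPy restatement of the bilinear weight computation used by
# the ROIAlign kernels above (w1..w4 over the four neighbouring pixels).
import numpy as np

def bilinear_weights(y, x, height, width):
    if y < -1.0 or y > height or x < -1.0 or x > width:
        return None                        # sample falls outside the feature map
    y, x = max(y, 0.0), max(x, 0.0)
    y_low, x_low = int(y), int(x)
    if y_low >= height - 1:
        y_high = y_low = height - 1
        y = float(y_low)
    else:
        y_high = y_low + 1
    if x_low >= width - 1:
        x_high = x_low = width - 1
        x = float(x_low)
    else:
        x_high = x_low + 1
    ly, lx = y - y_low, x - x_low
    hy, hx = 1.0 - ly, 1.0 - lx
    corners = [(y_low, x_low), (y_low, x_high), (y_high, x_low), (y_high, x_high)]
    weights = np.array([hy * hx, hy * lx, ly * hx, ly * lx])   # sums to 1
    return corners, weights
```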
- int n_stride = grad.stride(0); - int c_stride = grad.stride(1); - int h_stride = grad.stride(2); - int w_stride = grad.stride(3); - - auto rois_ = rois.contiguous(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad.scalar_type(), "ROIAlign_forward", [&] { - ROIAlignBackward( - grad.numel(), - grad.data_ptr(), - spatial_scale, - channels, - height, - width, - pooled_height, - pooled_width, - sampling_ratio, - grad_input.data_ptr(), - rois_.data_ptr(), - n_stride, - c_stride, - h_stride, - w_stride, - aligned); - }); - return grad_input; -} - -} // namespace detectron2 diff --git a/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_pangualpha.py b/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_pangualpha.py deleted file mode 100644 index ad02565aef75ac056e0daa7396fb1c6ad7aae072..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_pangualpha.py +++ /dev/null @@ -1,178 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.local_history = [] - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import pandas - self.info = "依赖检测通过" - self.success = True - except: - from toolbox import trimmed_format_exc - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\ - r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" 
+ trimmed_format_exc() - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - env = os.environ.get("PATH", "") - os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin') - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume + '/request_llm/jittorllms') - sys.path.append(root_dir_assume + '/request_llm/jittorllms') - validate_path() # validate path so you can run from base directory - - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'pangualpha'} - print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))') - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - print('done get model') - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - print('load_model') - load_model() - - # 进入任务等待状态 - print('进入任务等待状态') - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - query = kwargs['query'] - history = kwargs['history'] - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - print('触发重置') - self.jittorllms_model.reset() - self.local_history.append(query) - - print('收到消息,开始请求') - try: - for response in self.jittorllms_model.stream_chat(query, history): - print(response) - self.child.send(response) - except: - from toolbox import trimmed_format_exc - print(trimmed_format_exc()) - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global pangu_glm_handle -pangu_glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global pangu_glm_handle - if pangu_glm_handle is None: - pangu_glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + pangu_glm_handle.info - if not pangu_glm_handle.success: - error = pangu_glm_handle.info - pangu_glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in pangu_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - print(response) - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - 
chatbot.append((inputs, "")) - - global pangu_glm_handle - if pangu_glm_handle is None: - pangu_glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + pangu_glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not pangu_glm_handle.success: - pangu_glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in pangu_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/hirol/controlnetOverMask/README.md b/spaces/hirol/controlnetOverMask/README.md deleted file mode 100644 index 9f8369726e29f009a999d36fc5a3c182d1bb983e..0000000000000000000000000000000000000000 --- a/spaces/hirol/controlnetOverMask/README.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: ControlnetOverMask -emoji: 🌖 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false -license: mit - -tags: - - jax-diffusers-event ---- - -# ControlnetOverMask -### Controlnet -### Inpainting -### Stable diffusion - -Use controlnet to generate sketches and characters, and keep the background unchanged. - -Usually when we generate a very perfect background image, we want to add image elements, but using controlnet directly will affect the original background. This project aims to add elements to the page while keeping the background unchanged, and can directly operate on the original background. - -Support skeletal character generation and sketch generation. - -Optimize an inpaint model for the general domain against the stablediffusion-inpaint model. - -models already upload huggingface model space. 
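A hedged sketch of how the two checkpoints this README lists below might be wired together with diffusers: the pipeline class and call signature assume a recent diffusers release, and whether these repos load directly via `from_pretrained` as an inpainting model plus an openpose ControlNet is an untested assumption. File names are placeholders.

```python
# Sketch only: assumes diffusers ships StableDiffusionControlNetInpaintPipeline
# and that the two hirol/ checkpoints listed below load as an inpainting SD
# model plus an openpose ControlNet.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("hirol/control_any5_openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "hirol/Any-inpainting", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

background = load_image("forest_background.png")   # SD-generated scene to keep untouched
mask = load_image("mask.png")                      # white where the new element goes
pose = load_image("openpose_skeleton.png")         # skeleton or sketch condition

out = pipe(
    "a person standing in the forest",
    image=background,
    mask_image=mask,
    control_image=pose,
    num_inference_steps=30,
).images[0]
out.save("composited.png")
```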
-- hirol/Any-inpainting -- hirol/control_any5_openpose - -### Two modes, openpose control and manuscript control -#### openpose control -Add character skeletons to a forest scene generated by SD, and keep the background unchanged to generate controllable characters - - - -#### manuscript control -Added manuscript houses to an SD generated forest scene - - -Added manuscript chairs to an SD generated snow scene - - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task076_Fluo_N3DH_SIM.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task076_Fluo_N3DH_SIM.py deleted file mode 100644 index 435592c5d6f10f3c15fcbe16a140aa06eecf5a00..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task076_Fluo_N3DH_SIM.py +++ /dev/null @@ -1,312 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from multiprocessing import Pool -from multiprocessing.dummy import Pool - -import SimpleITK as sitk -import numpy as np -from batchgenerators.utilities.file_and_folder_operations import * -from skimage.io import imread -from skimage.io import imsave -from skimage.morphology import ball -from skimage.morphology import erosion -from skimage.transform import resize - -from nnunet.paths import nnUNet_raw_data -from nnunet.paths import preprocessing_output_dir - - -def load_bmp_convert_to_nifti_borders(img_file, lab_file, img_out_base, anno_out, spacing, border_thickness=0.7): - img = imread(img_file) - img_itk = sitk.GetImageFromArray(img.astype(np.float32)) - img_itk.SetSpacing(np.array(spacing)[::-1]) - sitk.WriteImage(img_itk, join(img_out_base + "_0000.nii.gz")) - - if lab_file is not None: - l = imread(lab_file) - borders = generate_border_as_suggested_by_twollmann(l, spacing, border_thickness) - l[l > 0] = 1 - l[borders == 1] = 2 - l_itk = sitk.GetImageFromArray(l.astype(np.uint8)) - l_itk.SetSpacing(np.array(spacing)[::-1]) - sitk.WriteImage(l_itk, anno_out) - - -def generate_ball(spacing, radius, dtype=int): - radius_in_voxels = np.round(radius / np.array(spacing)).astype(int) - n = 2 * radius_in_voxels + 1 - ball_iso = ball(max(n) * 2, dtype=np.float64) - ball_resampled = resize(ball_iso, n, 1, 'constant', 0, clip=True, anti_aliasing=False, preserve_range=True) - ball_resampled[ball_resampled > 0.5] = 1 - ball_resampled[ball_resampled <= 0.5] = 0 - return ball_resampled.astype(dtype) - - -def generate_border_as_suggested_by_twollmann(label_img: np.ndarray, spacing, border_thickness: float = 2) -> np.ndarray: - border = np.zeros_like(label_img) - selem = generate_ball(spacing, border_thickness) - for l in np.unique(label_img): - if l == 0: continue - mask = (label_img == l).astype(int) - eroded = erosion(mask, selem) - border[(eroded == 0) & (mask != 0)] = 1 - return border - - -def find_differences(labelstr1, labelstr2): - for n in subfiles(labelstr1, 
suffix='.nii.gz', join=False): - a = sitk.GetArrayFromImage(sitk.ReadImage(join(labelstr1, n))) - b = sitk.GetArrayFromImage(sitk.ReadImage(join(labelstr2, n))) - print(n, np.sum(a != b)) - - -def prepare_task(base, task_id, task_name, spacing, border_thickness: float = 15, processes: int = 16): - p = Pool(processes) - - foldername = "Task%03.0d_%s" % (task_id, task_name) - - out_base = join(nnUNet_raw_data, foldername) - imagestr = join(out_base, "imagesTr") - imagests = join(out_base, "imagesTs") - labelstr = join(out_base, "labelsTr") - maybe_mkdir_p(imagestr) - maybe_mkdir_p(imagests) - maybe_mkdir_p(labelstr) - - train_patient_names = [] - test_patient_names = [] - res = [] - - for train_sequence in [i for i in subfolders(base + "_train", join=False) if not i.endswith("_GT")]: - train_cases = subfiles(join(base + '_train', train_sequence), suffix=".tif", join=False) - for t in train_cases: - casename = train_sequence + "_" + t[:-4] - img_file = join(base + '_train', train_sequence, t) - lab_file = join(base + '_train', train_sequence + "_GT", "SEG", "man_seg" + t[1:]) - if not isfile(lab_file): - continue - img_out_base = join(imagestr, casename) - anno_out = join(labelstr, casename + ".nii.gz") - res.append( - p.starmap_async(load_bmp_convert_to_nifti_borders, ((img_file, lab_file, img_out_base, anno_out, spacing, border_thickness),))) - train_patient_names.append(casename) - - for test_sequence in [i for i in subfolders(base + "_test", join=False) if not i.endswith("_GT")]: - test_cases = subfiles(join(base + '_test', test_sequence), suffix=".tif", join=False) - for t in test_cases: - casename = test_sequence + "_" + t[:-4] - img_file = join(base + '_test', test_sequence, t) - lab_file = None - img_out_base = join(imagests, casename) - anno_out = None - res.append( - p.starmap_async(load_bmp_convert_to_nifti_borders, ((img_file, lab_file, img_out_base, anno_out, spacing, border_thickness),))) - test_patient_names.append(casename) - - _ = [i.get() for i in res] - - json_dict = {} - json_dict['name'] = task_name - json_dict['description'] = "" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "" - json_dict['licence'] = "" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "BF", - } - json_dict['labels'] = { - "0": "background", - "1": "cell", - "2": "border", - } - - json_dict['numTraining'] = len(train_patient_names) - json_dict['numTest'] = len(test_patient_names) - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i, "label": "./labelsTr/%s.nii.gz" % i} for i in - train_patient_names] - json_dict['test'] = ["./imagesTs/%s.nii.gz" % i for i in test_patient_names] - - save_json(json_dict, os.path.join(out_base, "dataset.json")) - p.close() - p.join() - - -def plot_images(folder, output_folder): - maybe_mkdir_p(output_folder) - import matplotlib.pyplot as plt - for i in subfiles(folder, suffix='.nii.gz', join=False): - img = sitk.GetArrayFromImage(sitk.ReadImage(join(folder, i))) - center_slice = img[img.shape[0]//2] - plt.imsave(join(output_folder, i[:-7] + '.png'), center_slice) - - -def convert_to_tiff(nifti_image: str, output_name: str): - npy = sitk.GetArrayFromImage(sitk.ReadImage(nifti_image)) - imsave(output_name, npy.astype(np.uint16), compress=6) - - -def convert_to_instance_seg(arr: np.ndarray, spacing: tuple = (0.2, 0.125, 0.125)): - from skimage.morphology import label, dilation - # 1 is core, 2 is border - objects = label((arr == 1).astype(int)) - final = np.copy(objects) - remaining_border = arr == 2 - current = np.copy(objects) 
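-    # grow the labelled cores outward one anisotropy-aware dilation step at a time so
-    # that every border voxel (label 2) gets assigned to its nearest core instance;
-    # dilated_mm tracks how far (in mm) the dilation has progressed along each axis,
-    # keeping the growth roughly isotropic in physical space despite voxel anisotropy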
- dilated_mm = np.array((0, 0, 0)) - spacing = np.array(spacing) - - while np.sum(remaining_border) > 0: - strel_size = [0, 0, 0] - maximum_dilation = max(dilated_mm) - for i in range(3): - if spacing[i] == min(spacing): - strel_size[i] = 1 - continue - if dilated_mm[i] + spacing[i] / 2 < maximum_dilation: - strel_size[i] = 1 - ball_here = ball(1) - - if strel_size[0] == 0: ball_here = ball_here[1:2] - if strel_size[1] == 0: ball_here = ball_here[:, 1:2] - if strel_size[2] == 0: ball_here = ball_here[:, :, 1:2] - - #print(1) - dilated = dilation(current, ball_here) - diff = (current == 0) & (dilated != current) - final[diff & remaining_border] = dilated[diff & remaining_border] - remaining_border[diff] = 0 - current = dilated - dilated_mm = [dilated_mm[i] + spacing[i] if strel_size[i] == 1 else dilated_mm[i] for i in range(3)] - return final.astype(np.uint32) - - -def convert_to_instance_seg2(arr: np.ndarray, spacing: tuple = (0.2, 0.125, 0.125), small_center_threshold=30, - isolated_border_as_separate_instance_threshold: int = 15): - from skimage.morphology import label, dilation - # we first identify centers that are too small and set them to be border. This should remove false positive instances - objects = label((arr == 1).astype(int)) - for o in np.unique(objects): - if o > 0 and np.sum(objects == o) <= small_center_threshold: - arr[objects == o] = 2 - - # 1 is core, 2 is border - objects = label((arr == 1).astype(int)) - final = np.copy(objects) - remaining_border = arr == 2 - current = np.copy(objects) - dilated_mm = np.array((0, 0, 0)) - spacing = np.array(spacing) - - while np.sum(remaining_border) > 0: - strel_size = [0, 0, 0] - maximum_dilation = max(dilated_mm) - for i in range(3): - if spacing[i] == min(spacing): - strel_size[i] = 1 - continue - if dilated_mm[i] + spacing[i] / 2 < maximum_dilation: - strel_size[i] = 1 - ball_here = ball(1) - - if strel_size[0] == 0: ball_here = ball_here[1:2] - if strel_size[1] == 0: ball_here = ball_here[:, 1:2] - if strel_size[2] == 0: ball_here = ball_here[:, :, 1:2] - - #print(1) - dilated = dilation(current, ball_here) - diff = (current == 0) & (dilated != current) - final[diff & remaining_border] = dilated[diff & remaining_border] - remaining_border[diff] = 0 - current = dilated - dilated_mm = [dilated_mm[i] + spacing[i] if strel_size[i] == 1 else dilated_mm[i] for i in range(3)] - - # what can happen is that a cell is so small that the network only predicted border and no core. This cell will be - # fused with the nearest other instance, which we don't want. 
Therefore we identify isolated border predictions and - # give them a separate instance id - # we identify isolated border predictions by checking each foreground object in arr and see whether this object - # also contains label 1 - max_label = np.max(final) - - foreground_objects = label((arr != 0).astype(int)) - for i in np.unique(foreground_objects): - if i > 0 and (1 not in np.unique(arr[foreground_objects==i])): - size_of_object = np.sum(foreground_objects==i) - if size_of_object >= isolated_border_as_separate_instance_threshold: - final[foreground_objects == i] = max_label + 1 - max_label += 1 - #print('yeah boi') - - return final.astype(np.uint32) - - -def load_instanceseg_save(in_file: str, out_file:str, better: bool): - itk_img = sitk.ReadImage(in_file) - if not better: - instanceseg = convert_to_instance_seg(sitk.GetArrayFromImage(itk_img)) - else: - instanceseg = convert_to_instance_seg2(sitk.GetArrayFromImage(itk_img)) - itk_out = sitk.GetImageFromArray(instanceseg) - itk_out.CopyInformation(itk_img) - sitk.WriteImage(itk_out, out_file) - - -def convert_all_to_instance(input_folder: str, output_folder: str, processes: int = 24, better: bool = False): - maybe_mkdir_p(output_folder) - p = Pool(processes) - files = subfiles(input_folder, suffix='.nii.gz', join=False) - output_files = [join(output_folder, i) for i in files] - input_files = [join(input_folder, i) for i in files] - better = [better] * len(files) - r = p.starmap_async(load_instanceseg_save, zip(input_files, output_files, better)) - _ = r.get() - p.close() - p.join() - - -if __name__ == "__main__": - base = "/home/fabian/data/Fluo-N3DH-SIM" - task_id = 76 - task_name = 'Fluo_N3DH_SIM' - spacing = (0.2, 0.125, 0.125) - border_thickness = 0.5 - - prepare_task(base, task_id, task_name, spacing, border_thickness, 12) - - # we need custom splits - task_name = "Task076_Fluo_N3DH_SIM" - labelsTr = join(nnUNet_raw_data, task_name, "labelsTr") - cases = subfiles(labelsTr, suffix='.nii.gz', join=False) - splits = [] - splits.append( - {'train': [i[:-7] for i in cases if i.startswith('01_')], - 'val': [i[:-7] for i in cases if i.startswith('02_')]} - ) - splits.append( - {'train': [i[:-7] for i in cases if i.startswith('02_')], - 'val': [i[:-7] for i in cases if i.startswith('01_')]} - ) - - maybe_mkdir_p(join(preprocessing_output_dir, task_name)) - - save_pickle(splits, join(preprocessing_output_dir, task_name, "splits_final.pkl")) - - # test set was converted to instance seg with convert_all_to_instance with better=True - - # convert to tiff with convert_to_tiff - - - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/cascade/nnUNetTrainerV2CascadeFullRes_shorter.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/cascade/nnUNetTrainerV2CascadeFullRes_shorter.py deleted file mode 100644 index 26da3d5a294daecdc5e1040cb1f24286ad11fb1d..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/cascade/nnUNetTrainerV2CascadeFullRes_shorter.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from nnunet.training.network_training.nnUNetTrainerV2_CascadeFullRes import nnUNetTrainerV2CascadeFullRes - - -class nnUNetTrainerV2CascadeFullRes_shorter(nnUNetTrainerV2CascadeFullRes): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, previous_trainer="nnUNetTrainerV2", fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, - batch_dice, stage, unpack_data, deterministic, - previous_trainer, fp16) - self.max_num_epochs = 500 diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_2.sh b/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_2.sh deleted file mode 100644 index 2c2dc829bae31c2a39b2792cd98f19012309e0be..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_2.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -l -#SBATCH --nodes=1 --gres=gpu:1 --time=24:00:00 -#SBATCH --job-name=Task505_glacier_mtl_early_boundary_2 - -export data_raw="/home/woody/iwi5/iwi5039h/data_raw" -export nnUNet_raw_data_base="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_raw_data_base/" -export nnUNet_preprocessed="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_preprocessed/" -export RESULTS_FOLDER="/home/woody/iwi5/iwi5039h/nnUNet_data/RESULTS_FOLDER" - -cd nnunet_glacer -pwd -conda activate nnunet - -#python3 nnunet/dataset_conversion/Task504_Glacier_mtl_recon.py -data_percentage 100 -base $data_raw -#python3 nnunet/experiment_planning/nnUNet_plan_and_preprocess.py -t 504 -pl3d None -pl2d ExperimentPlanner2D_mtl - -python3 nnunet/run/run_training.py 2d nnUNetTrainerMTLearly_boundary 505 2 -p nnUNetPlans_mtl --disable_postprocessing_on_folds -python3 nnunet/inference/predict_simple.py -i $nnUNet_raw_data_base/nnUNet_raw_data/Task505_Glacier_mtl_boundary/imagesTs -o $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_2 -t 505 -m 2d -f 2 -p nnUNetPlans_mtl -tr nnUNetTrainerMTLearly_boundary -python3 nnunet/dataset_conversion/Task505_Glacier_mtl_recon_reverse.py -i $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_2 -python3 ./evaluate_nnUNet.py --predictions $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_2/pngs --labels_fronts $data_raw/fronts/test --labels_zones $data_raw/zones/test --sar_images $data_raw/sar_images/test diff --git a/spaces/huazhao/anime-remove-background/README.md b/spaces/huazhao/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/huazhao/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hugggof/vampnet/vampnet/beats.py b/spaces/hugggof/vampnet/vampnet/beats.py deleted file mode 100644 index 2b03a4e3df705a059cd34e6e01a72752fc4d8a98..0000000000000000000000000000000000000000 --- a/spaces/hugggof/vampnet/vampnet/beats.py +++ /dev/null @@ -1,250 +0,0 @@ -import json -import logging -import warnings -from dataclasses import dataclass -from pathlib import Path -from typing import Any -from typing import List -from typing import Tuple -from typing import Union - -import librosa -import torch -import numpy as np -from audiotools import AudioSignal - - -logging.basicConfig(level=logging.INFO) - -################### -# beat sync utils # -################### - -AGGREGATOR_REGISTRY = { - "mean": np.mean, - "median": np.median, - "max": np.max, - "min": np.min, -} - - -def list_aggregators() -> list: - return list(AGGREGATOR_REGISTRY.keys()) - - -@dataclass -class TimeSegment: - start: float - end: float - - @property - def duration(self): - return self.end - self.start - - def __str__(self) -> str: - return f"{self.start} - {self.end}" - - def find_overlapping_segment( - self, segments: List["TimeSegment"] - ) -> Union["TimeSegment", None]: - """Find the first segment that overlaps with this segment, or None if no segment overlaps""" - for s in segments: - if s.start <= self.start and s.end >= self.end: - return s - return None - - -def mkdir(path: Union[Path, str]) -> Path: - p = Path(path) - p.mkdir(parents=True, exist_ok=True) - return p - - - -################### -# beat data # -################### -@dataclass -class BeatSegment(TimeSegment): - downbeat: bool = False # if there's a downbeat on the start_time - - -class Beats: - def __init__(self, beat_times, downbeat_times): - if isinstance(beat_times, np.ndarray): - beat_times = beat_times.tolist() - if isinstance(downbeat_times, np.ndarray): - downbeat_times = downbeat_times.tolist() - self._beat_times = beat_times - self._downbeat_times = downbeat_times - self._use_downbeats = False - - def use_downbeats(self, use_downbeats: bool = True): - """use downbeats instead of beats when calling beat_times""" - self._use_downbeats = use_downbeats - - def beat_segments(self, signal: AudioSignal) -> List[BeatSegment]: - """ - segments a song into time segments corresponding to beats. - the first segment starts at 0 and ends at the first beat time. - the last segment starts at the last beat time and ends at the end of the song. 
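-        each segment's downbeat flag is True when the segment starts on a detected downbeat.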
- """ - beat_times = self._beat_times.copy() - downbeat_times = self._downbeat_times - beat_times.insert(0, 0) - beat_times.append(signal.signal_duration) - - downbeat_ids = np.intersect1d(beat_times, downbeat_times, return_indices=True)[ - 1 - ] - is_downbeat = [ - True if i in downbeat_ids else False for i in range(len(beat_times)) - ] - segments = [ - BeatSegment(start_time, end_time, downbeat) - for start_time, end_time, downbeat in zip( - beat_times[:-1], beat_times[1:], is_downbeat - ) - ] - return segments - - def get_beats(self) -> np.ndarray: - """returns an array of beat times, in seconds - if downbeats is True, returns an array of downbeat times, in seconds - """ - return np.array( - self._downbeat_times if self._use_downbeats else self._beat_times - ) - - @property - def beat_times(self) -> np.ndarray: - """return beat times""" - return np.array(self._beat_times) - - @property - def downbeat_times(self) -> np.ndarray: - """return downbeat times""" - return np.array(self._downbeat_times) - - def beat_times_to_feature_frames( - self, signal: AudioSignal, features: np.ndarray - ) -> np.ndarray: - """convert beat times to frames, given an array of time-varying features""" - beat_times = self.get_beats() - beat_frames = ( - beat_times * signal.sample_rate / signal.signal_length * features.shape[-1] - ).astype(np.int64) - return beat_frames - - def sync_features( - self, feature_frames: np.ndarray, features: np.ndarray, aggregate="median" - ) -> np.ndarray: - """sync features to beats""" - if aggregate not in AGGREGATOR_REGISTRY: - raise ValueError(f"unknown aggregation method {aggregate}") - - return librosa.util.sync( - features, feature_frames, aggregate=AGGREGATOR_REGISTRY[aggregate] - ) - - def to_json(self) -> dict: - """return beats and downbeats as json""" - return { - "beats": self._beat_times, - "downbeats": self._downbeat_times, - "use_downbeats": self._use_downbeats, - } - - @classmethod - def from_dict(cls, data: dict): - """load beats and downbeats from json""" - inst = cls(data["beats"], data["downbeats"]) - inst.use_downbeats(data["use_downbeats"]) - return inst - - def save(self, output_dir: Path): - """save beats and downbeats to json""" - mkdir(output_dir) - with open(output_dir / "beats.json", "w") as f: - json.dump(self.to_json(), f) - - @classmethod - def load(cls, input_dir: Path): - """load beats and downbeats from json""" - beats_file = Path(input_dir) / "beats.json" - with open(beats_file, "r") as f: - data = json.load(f) - return cls.from_dict(data) - - -################### -# beat tracking # -################### - - -class BeatTracker: - def extract_beats(self, signal: AudioSignal) -> Tuple[np.ndarray, np.ndarray]: - """extract beats from an audio signal""" - raise NotImplementedError - - def __call__(self, signal: AudioSignal) -> Beats: - """extract beats from an audio signal - NOTE: if the first beat (and/or downbeat) is detected within the first 100ms of the audio, - it is discarded. This is to avoid empty bins with no beat synced features in the first beat. 
- Args: - signal (AudioSignal): signal to beat track - Returns: - Tuple[np.ndarray, np.ndarray]: beats and downbeats - """ - beats, downbeats = self.extract_beats(signal) - return Beats(beats, downbeats) - - -class WaveBeat(BeatTracker): - def __init__(self, ckpt_path: str = "checkpoints/wavebeat", device: str = "cpu"): - from wavebeat.dstcn import dsTCNModel - - model = dsTCNModel.load_from_checkpoint(ckpt_path, map_location=torch.device(device)) - model.eval() - - self.device = device - self.model = model - - def extract_beats(self, signal: AudioSignal) -> Tuple[np.ndarray, np.ndarray]: - """returns beat and downbeat times, in seconds""" - # extract beats - beats, downbeats = self.model.predict_beats_from_array( - audio=signal.audio_data.squeeze(0), - sr=signal.sample_rate, - use_gpu=self.device != "cpu", - ) - - return beats, downbeats - - -class MadmomBeats(BeatTracker): - def __init__(self): - raise NotImplementedError - - def extract_beats(self, signal: AudioSignal) -> Tuple[np.ndarray, np.ndarray]: - """returns beat and downbeat times, in seconds""" - pass - - -BEAT_TRACKER_REGISTRY = { - "wavebeat": WaveBeat, - "madmom": MadmomBeats, -} - - -def list_beat_trackers() -> list: - return list(BEAT_TRACKER_REGISTRY.keys()) - - -def load_beat_tracker(beat_tracker: str, **kwargs) -> BeatTracker: - if beat_tracker not in BEAT_TRACKER_REGISTRY: - raise ValueError( - f"Unknown beat tracker {beat_tracker}. Available: {list_beat_trackers()}" - ) - - return BEAT_TRACKER_REGISTRY[beat_tracker](**kwargs) \ No newline at end of file diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r100.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r100.py deleted file mode 100644 index 24fd0417f2219e63e91fdbc92c609ebc596cee21..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r100.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.5, 0.0) -config.network = "r100" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/faces_emore" -config.num_classes = 85742 -config.num_image = 5822653 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/iamironman4279/SadTalker/src/face3d/util/generate_list.py b/spaces/iamironman4279/SadTalker/src/face3d/util/generate_list.py deleted file mode 100644 index 943d906781063c3584a7e5b5c784f8aac0694985..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/util/generate_list.py +++ /dev/null @@ -1,34 +0,0 @@ -"""This script is to generate training list files for Deep3DFaceRecon_pytorch -""" - -import os - -# save path to training data -def write_list(lms_list, imgs_list, msks_list, mode='train',save_folder='datalist', save_name=''): - save_path = os.path.join(save_folder, mode) - if not os.path.isdir(save_path): - os.makedirs(save_path) - with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in lms_list]) - - with 
open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in imgs_list]) - - with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in msks_list]) - -# check if the path is valid -def check_list(rlms_list, rimgs_list, rmsks_list): - lms_list, imgs_list, msks_list = [], [], [] - for i in range(len(rlms_list)): - flag = 'false' - lm_path = rlms_list[i] - im_path = rimgs_list[i] - msk_path = rmsks_list[i] - if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path): - flag = 'true' - lms_list.append(rlms_list[i]) - imgs_list.append(rimgs_list[i]) - msks_list.append(rmsks_list[i]) - print(i, rlms_list[i], flag) - return lms_list, imgs_list, msks_list diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/AIO - All In One - Runtimes 2.4.1 Crack VERIFIED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/AIO - All In One - Runtimes 2.4.1 Crack VERIFIED.md deleted file mode 100644 index e357184db567c6e22f34e1b9d9d2a5b530acd6a4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/AIO - All In One - Runtimes 2.4.1 Crack VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

              AIO - All in One - Runtimes 2.4.1 crack


              Download Ziphttps://urlin.us/2uEwhF



              - -kuyhAa.Me -All in One Runtimes 2.4.8 Terbaru merupakan kumpulan software atau program pendukung ketika windows ada berusaha ... 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md deleted file mode 100644 index b22be5948d4b63083c40322a7d7bdbf261e83f1b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ab Bulk Mailer 8 5 License Ndb Decommissioning.md +++ /dev/null @@ -1,6 +0,0 @@ -
              -

              In June 2017, the Department of Health settled an enforcement action brought by the New York State Attorney General’s Office, agreeing to assess a total of nearly $1.7 million in penalties and fees against Holtec. The Department was also directed to send Holtec’s payments to the New York State Comptroller for deposit in a Nuclear decommissioning fund and account. Holtec agreed to provide quarterly reports on activities at Indian Point for this year and the following three years.

              -

              The Indian Point NRC license was issued by the US Nuclear Regulatory Commission (NRC) in 1994 and renewed in 2010 and 2014. New York state intervened on Indian Point on June 28, 2013, because the NRC has failed to consider the risks of a post-decommissioning nuclear waste repository at Indian Point and the surrounding area. State officials have prepared a plan for dealing with the waste from the site. The plan included a formal letter of objection to the license transfer. Following the 2015 license transfer, NY officials opposed the transfer on safety grounds. The New York State Court of Appeals intervened, claiming that the NRC improperly delegated its responsibility to New York State and the court has questioned the government s motives for license transfer. In April 2017, the NRC denied NY s request to hold a contested case hearing. The NRC has ignored significant safety, financial and engineering issues. NY and the court are fighting to turn back the license transfer. Holtec is undeterred. Last week, in an action that was not informed to the court, the New York State Department of Health suspended the license for daily patient access to Indian Point by workers and patients to minimize the risk of radiation exposure and to reduce the spread of the Covid-19 pandemic. This disruption was evidently costly because the suspension is still in place and worker access is limited to primarily freight delivery and collection services.

              -

              Ab Bulk Mailer 8 5 License Ndb Decommissioning


              Download Filehttps://urlin.us/2uEvCh



              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Axioo Fw01 Driver Download !NEW!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Axioo Fw01 Driver Download !NEW!.md deleted file mode 100644 index 2a6b0ff1afb253d0d44dee115d65c81f69fc0a27..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Axioo Fw01 Driver Download !NEW!.md +++ /dev/null @@ -1,11 +0,0 @@ -

              axioo fw01 driver download


              DOWNLOADhttps://urlin.us/2uExjL



              - -axioo fw01 driver -About the application Show your bonus card with every order in our establishment and get points. -For the accumulated points you can get nice gifts! -You can receive your first gift on your next visit. -About us The network of Japanese restaurants "Eurasia" invites you to visit restaurants and enjoy Japanese cuisine in the city of St. Petersburg. -Our restaurants are created 8a78ff9644
              -
              -
              -

              diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md b/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md deleted file mode 100644 index d1b0473d00d66fd0a3f67f6b0e5e0bebf8565653..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version].md +++ /dev/null @@ -1,30 +0,0 @@ -
              -Here is what I created: - -

              DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version]

              -

              DAEMON Tools Ultra is a powerful and versatile software that allows you to create, mount, and manage virtual drives and disc images. With DAEMON Tools Ultra, you can create bootable USB sticks, virtual hard disks, and RAM disks, as well as emulate various types of optical drives and discs.

              -

              DAEMON Tools Ultra 5.7.0 Crack With Key [Full Version]


              Download File > https://urlin.us/2uEwzh



              -

              If you want to enjoy the full features of DAEMON Tools Ultra, you need to activate it with a valid license key. However, some people may try to use a cracked version of the software, which can expose them to various risks and problems. In this article, we will explain why you should avoid using DAEMON Tools Ultra 5.7.0 crack with key, and how you can get a legitimate copy of the software.

              -

              Why You Should Avoid Using DAEMON Tools Ultra 5.7.0 Crack With Key

              -

              Using a cracked version of DAEMON Tools Ultra may seem like a tempting option, especially if you don't want to pay for the software. However, there are many reasons why you should avoid doing so, such as:

              -
                -
              • It is illegal. Cracking software is a form of piracy, which violates the intellectual property rights of the developers and distributors of the software. By using a cracked version of DAEMON Tools Ultra, you are breaking the law and risking legal consequences.
              • -
              • It is unsafe. Cracked software often comes from untrusted sources, such as torrent sites or shady websites. These sources may contain malware, viruses, spyware, or other harmful programs that can infect your computer and compromise your security and privacy. You may also expose your personal data and sensitive information to hackers and cybercriminals.
              • -
              • It is unreliable. Cracked software may not work properly or at all, as it may have errors, bugs, or compatibility issues. You may also miss out on important updates and patches that fix bugs and improve performance and stability. Moreover, you may not be able to access customer support or technical assistance if you encounter any problems with the software.
              • -
              • It is unethical. Cracking software is a form of stealing, which harms the developers and distributors of the software who invest time, money, and effort into creating and maintaining it. By using a cracked version of DAEMON Tools Ultra, you are depriving them of their rightful income and discouraging them from developing more quality products in the future.
              • -
              -

              How You Can Get a Legitimate Copy of DAEMON Tools Ultra

              -

              If you want to use DAEMON Tools Ultra without any risks or problems, you should get a legitimate copy of the software from the official website: https://www.daemon-tools.cc/products/dtultra.

              -

              On the website, you can choose from different plans and pricing options that suit your needs and budget. You can also download a free trial version of the software that lets you test its features for 14 days.

              -

              -

              By getting a legitimate copy of DAEMON Tools Ultra, you can enjoy the following benefits:

              -
                -
              • It is legal. You will have a valid license key that proves your ownership and authorization to use the software. You will not violate any laws or regulations by using the software.
              • -
              • It is safe. You will download the software from a trusted source that guarantees its quality and security. You will not expose your computer or data to any malware or threats by using the software.
              • -
              • It is reliable. You will get the latest version of the software that works smoothly and efficiently. You will also receive regular updates and patches that fix bugs and improve performance and stability. Moreover, you will be able to access customer support and technical assistance if you encounter any problems with the software.
              • -
              • It is ethical. You will support the developers and distributors of the software who deserve to be rewarded for their work and innovation. You will also encourage them to continue creating and improving more quality products in the future.
              • -
              -

              Conclusion

              -

              DAEMON Tools Ultra is a great software that allows you to create, mount

              d5da3c52bf
              -
              -
              \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md deleted file mode 100644 index f65029fce5fcb667de737a74752fffc8f463b00e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hindi Laghu Natika Script [VERIFIED] Download Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Hindi Laghu Natika Script Download Pdf


              DOWNLOAD 🆗 https://urlin.us/2uEwN9



              - -Hindi laghu natika script download ... Jain books pdf and references jaina-jainlink. ... Navya laghu natak (नव्य लघु नाटक), sunil khanna | download. 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/inreVtussa/clothingai/Examples/Autodesk 3ds Max 2014 Crack Torrent BEST.md b/spaces/inreVtussa/clothingai/Examples/Autodesk 3ds Max 2014 Crack Torrent BEST.md deleted file mode 100644 index f929f416b463b43a42f846111116710064943481..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Autodesk 3ds Max 2014 Crack Torrent BEST.md +++ /dev/null @@ -1,9 +0,0 @@ -
              -

              if you still have trouble after using these steps, you can always try to use different software which is more compatible with your pc. for example, you can try to use a free software such as serial number generatorfrom http://assetgeneratorz.com/ . asset generator is a powerful software used to generate serial numbers, product keys, and license keys for anti-virus, instant messaging, video editors, music editors, etc. this software supports all versions of windows os, including windows 7, windows 8, windows 10, etc.generate serial numbers, product keys, and license keys for anti-virus, instant messaging, video editors, music editors, etc.

              -

              autodesk 3ds max 2014 crack torrent


              Download ✺✺✺ https://tiurll.com/2uCjwz



              -

              autodesk 3ds max 2014 crack serial key provides a comprehensive, integrated 3d modeling, animation, and rendering solution for game developers, visual effects artists, and graphic designers. it has grown to be one of the top 3d animation software options, focused on providing a powerful modeling architecture for graphic designers. the new version includes a host of new features and several clever improvements to the existing tools.

              -

              software registration is required to access autodesk and autodesk application network services. registration provides you with access to the benefits of the network such as technical support, automatic updates, free education, online tutorials and the ability to redeem a certificate for a free license to autodesk products.

              -

              autodesk uses different types of license keys depending on the version of autodesk software that you purchase. if you are using a single license for autodesk software as part of a product suite, you will be prompted to follow the instructions in that suite on how to obtain a license for the software as part of that suite.

              -

              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md b/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md deleted file mode 100644 index e1997118d5f16f533a3b8e10f3a5e3d9b41d1a0a..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Autodesk Autocad 2015 Keygen Fix Tor.md +++ /dev/null @@ -1,7 +0,0 @@ -
              -

              to set the font to unicode, go to the options dialog box, choose view, and click modify. in the modify font dialog box:
              in the select font dialog box, set the type to unicode and click ok. autocad will create the font file with the special "%c" character set in the font name, but for this release that character set does not appear in the font file.

              -

              Autodesk Autocad 2015 Keygen Tor


              Downloadhttps://tiurll.com/2uClzB



              -

              a primary benefit of unicode is that it uses a universal coding system with a unique string for every character ensuring consistency regardless of platform, software, or language. before unicode was supported by autocad, control codes were used for some of the most common drawing symbols including the diameter symbol in autocad. those control codes are still supported by autocad and can be used in single- and multi-line text. the autocad diameter symbol code is %%c. you may ask why not %%d well, the equally popular degree symbol gets that honor. visit the autocad help for a l ist of additional control codes.

              -

              the following is a listing of all of the autodesk applications:

              • autocad - autodesk's foremost application in design. use it to create drawings, 3d models, and animations.
              • autocad lt - the lightweight version of the application, offers some of the same functionality as autocad.
              • autocad classic - a legacy version that works under dos and windows 9x.
              • autocad 2009 - this is the last version of autocad to run on windows xp.
              • autocad 2010 - this is the first version of autocad to run on windows 7.
              • autocad lt 2010 - the lightweight version of autocad, the light engine of autocad, offers some of the same functionality as autocad.
              • autocad 2011 - this is the first version of autocad to run on windows 8.
              • autocad 2013 - this is the first version of autocad to run on windows 8.1.
              • autocad 2014 - this is the first version of autocad to run on windows 10.
              • autocad light table - this is the first version of autocad to run on mac os x.
              • autocad lt 2014 - the lightweight version of autocad, the light engine of autocad, offers some of the same functionality as autocad.
              • autocad classic 2013 - the legacy version of autocad, works under dos and windows 9x.
              • autocad classic 2010 - the legacy version of autocad, works under windows nt.
              • autocad lt 2009 - the lightweight version of autocad, the light engine of autocad, offers some of the same functionality as autocad.
              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md b/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md deleted file mode 100644 index 0316b87a2dbfb343fca766ba1b8ebe472ba90964..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Conduct Certificate Format Tamil Nadu Pdf 80.md +++ /dev/null @@ -1,6 +0,0 @@ -

              conduct certificate format tamil nadu pdf 80


              DOWNLOAD ››› https://tiurll.com/2uCkJF



              - -33-A Tamil Nadu State Co-operative Societies Election Commission. 34. ... society. 150. Powers of Registrar to issue certificate for recovery of sums due ... resolution to conduct the affairs of each of the new societies for a period of three months from ... any form whatsoever in the assets or profits of the registered society. 23. 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/ismot/1702t1/utils/logger.py b/spaces/ismot/1702t1/utils/logger.py deleted file mode 100644 index 0f2e4dc66099c7e4784e37ab924e8594ffa03e27..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/utils/logger.py +++ /dev/null @@ -1,49 +0,0 @@ -""" -@Date: 2021/07/17 -@description: -""" -import os -import sys -import logging -import functools -from termcolor import colored - - -def build_logger(config): - output_dir = config.LOGGER.DIR - local_rank = config.LOCAL_RANK - name = config.MODEL.NAME - logger = get_logger(output_dir, local_rank, name) - return logger - - -@functools.lru_cache() -def get_logger(output_dir=None, local_rank=None, name="PLTNet"): - if output_dir and not os.path.exists(output_dir): - os.makedirs(output_dir) - - # create logger - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - logger.propagate = False - - # create formatter - fmt = f'[%(asctime)s %(name)s][%(levelname)1.1s](%(filename)s %(lineno)d): %(message)s' - color_fmt = colored(f'[%(asctime)s %(name)s][%(levelname)1.1s][{local_rank}]', 'green') + colored( - f'(%(filename)s %(lineno)d)', - 'yellow') + ': %(message)s' - if local_rank in [0] or local_rank is None: - console_handler = logging.StreamHandler(sys.stdout) - console_handler.setLevel(logging.DEBUG) - console_handler.setFormatter( - logging.Formatter(fmt=color_fmt, datefmt='%Y-%m-%d %H:%M:%S')) - logger.addHandler(console_handler) - - if output_dir is not None: - # create file handlers - file_handler = logging.FileHandler(os.path.join(output_dir, f'log_rank{local_rank}.log'), mode='a') - file_handler.setLevel(logging.DEBUG) - file_handler.setFormatter(logging.Formatter(fmt=fmt, datefmt='%Y-%m-%d %H:%M:%S')) - logger.addHandler(file_handler) - - return logger diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py deleted file mode 100644 index 7f56675be57834afb1e72051cc8d21fc611d9960..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig1a_Distribution.py +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN - -import csv -import json -import numpy as np -import pandas as pd -import seaborn as sns -import matplotlib.pyplot as plt -from matplotlib import rc - - -with open('../../Data/database/Kcat_combination_0918.json', 'r') as infile : - entries = json.load(infile) - -print(len(entries)) - -Kcat = [float(entry['Value']) for entry in entries] - -plt.figure(figsize=(3,3)) - -# To solve the 'Helvetica' font cannot be used in PDF file -# https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font -rc('font',**{'family':'serif','serif':['Helvetica']}) -plt.rcParams['pdf.fonttype'] = 42 - -# plt.axes([0.12,0.12,0.83,0.83]) - -plt.tick_params(direction='in') -plt.tick_params(which='major',length=1.5) -plt.tick_params(which='major',width=0.4) - -plt.hist(Kcat,5000,color='#2166ac') -plt.xlabel('$k$$_\mathregular{cat}$ value', fontsize=7) -plt.ylabel('Counts', fontsize=7) - -plt.rcParams['font.family'] = 'Helvetica' - -# plt.xlim(0,500000) -# plt.xticks([0,10,100,1000,10000,100000]) - -ax = plt.gca() -ax.spines['bottom'].set_linewidth(0.5) -ax.spines['left'].set_linewidth(0.5) -ax.spines['top'].set_linewidth(0.5) -ax.spines['right'].set_linewidth(0.5) - -plt.yscale('log') -plt.xscale('log') - -plt.xticks(fontsize=6) 
-plt.yticks(fontsize=6) - -plt.tight_layout() - -plt.savefig("../../Results/figures/SuppleFig1a.pdf", dpi=400) - - - diff --git a/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx b/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/jmesikto/whisper-webui/app-local.py b/spaces/jmesikto/whisper-webui/app-local.py deleted file mode 100644 index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/app-local.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1)) \ No newline at end of file diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py deleted file mode 100644 index ef04ade68544d0477a7f6deb4e7d51e97f592910..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataLoader diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py deleted file mode 100644 index c231599e37b3a5864a774387d717baf297957876..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py +++ /dev/null @@ -1,46 +0,0 @@ -from io import BytesIO -from fontTools import cffLib -from . import DefaultTable - - -class table_C_F_F_(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.cff = cffLib.CFFFontSet() - self._gaveGlyphOrder = False - - def decompile(self, data, otFont): - self.cff.decompile(BytesIO(data), otFont, isCFF2=False) - assert len(self.cff) == 1, "can't deal with multi-font CFF tables." 
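-        # note: the companion compile() method re-serializes the CFFFontSet back into binary 'CFF ' table data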
- - def compile(self, otFont): - f = BytesIO() - self.cff.compile(f, otFont, isCFF2=False) - return f.getvalue() - - def haveGlyphNames(self): - if hasattr(self.cff[self.cff.fontNames[0]], "ROS"): - return False # CID-keyed font - else: - return True - - def getGlyphOrder(self): - if self._gaveGlyphOrder: - from fontTools import ttLib - - raise ttLib.TTLibError("illegal use of getGlyphOrder()") - self._gaveGlyphOrder = True - return self.cff[self.cff.fontNames[0]].getGlyphOrder() - - def setGlyphOrder(self, glyphOrder): - pass - # XXX - # self.cff[self.cff.fontNames[0]].setGlyphOrder(glyphOrder) - - def toXML(self, writer, otFont): - self.cff.toXML(writer) - - def fromXML(self, name, attrs, content, otFont): - if not hasattr(self, "cff"): - self.cff = cffLib.CFFFontSet() - self.cff.fromXML(name, attrs, content, otFont) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py deleted file mode 100644 index 3433fdbc96cc68505a999f20919387b0d2acf31f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/pointPen.py +++ /dev/null @@ -1,5 +0,0 @@ -"""DEPRECATED - This module is kept here only as a backward compatibility shim -for the old ufoLib.pointPen module, which was moved to fontTools.pens.pointPen. -Please use the latter instead. -""" -from fontTools.pens.pointPen import * diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py deleted file mode 100644 index d0b5dc91990ac75b7165d36a384282249ee644ab..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/transaction.py +++ /dev/null @@ -1,81 +0,0 @@ -class Transaction: - """Filesystem transaction write context - - Gathers files for deferred commit or discard, so that several write - operations can be finalized semi-atomically. 
This works by having this - instance as the ``.transaction`` attribute of the given filesystem - """ - - def __init__(self, fs): - """ - Parameters - ---------- - fs: FileSystem instance - """ - self.fs = fs - self.files = [] - - def __enter__(self): - self.start() - - def __exit__(self, exc_type, exc_val, exc_tb): - """End transaction and commit, if exit is not due to exception""" - # only commit if there was no exception - self.complete(commit=exc_type is None) - self.fs._intrans = False - self.fs._transaction = None - - def start(self): - """Start a transaction on this FileSystem""" - self.files = [] # clean up after previous failed completions - self.fs._intrans = True - - def complete(self, commit=True): - """Finish transaction: commit or discard all deferred files""" - for f in self.files: - if commit: - f.commit() - else: - f.discard() - self.files = [] - self.fs._intrans = False - - -class FileActor: - def __init__(self): - self.files = [] - - def commit(self): - for f in self.files: - f.commit() - self.files.clear() - - def discard(self): - for f in self.files: - f.discard() - self.files.clear() - - def append(self, f): - self.files.append(f) - - -class DaskTransaction(Transaction): - def __init__(self, fs): - """ - Parameters - ---------- - fs: FileSystem instance - """ - import distributed - - super().__init__(fs) - client = distributed.default_client() - self.files = client.submit(FileActor, actor=True).result() - - def complete(self, commit=True): - """Finish transaction: commit or discard all deferred files""" - if commit: - self.files.commit().result() - else: - self.files.discard().result() - self.fs._intrans = False diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py deleted file mode 100644 index ce6cc2bd4cb4194c37f72b2bc0ca0d4475c04a95..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/query_transform.py +++ /dev/null @@ -1,66 +0,0 @@ -"""Query transform.""" - -from typing import Optional - -from gpt_index.indices.query.schema import QueryBundle -from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor -from gpt_index.prompts.base import Prompt -from gpt_index.prompts.default_prompts import DEFAULT_HYDE_PROMPT - - -class BaseQueryTransform: - """Base class for query transform. - - A query transform augments a raw query string with associated transformations - to improve index querying. - """ - - def __call__(self, query_str: str) -> QueryBundle: - """Run query processor.""" - return QueryBundle(query_str=query_str, custom_embedding_strs=[query_str]) - - -class HyDEQueryTransform(BaseQueryTransform): - """Hypothetical Document Embeddings (HyDE) query transform. - - It uses an LLM to generate hypothetical answer(s) to a given query, - and use the resulting documents as embedding strings. - - As described in `[Precise Zero-Shot Dense Retrieval without Relevance Labels] - (https://arxiv.org/abs/2212.10496)` - """ - - def __init__( - self, - llm_predictor: Optional[LLMPredictor] = None, - hyde_prompt: Optional[Prompt] = None, - include_original: bool = True, - ) -> None: - """Initialize HyDEQueryTransform. 
- - Args: - llm_predictor (Optional[LLMPredictor]): LLM for generating - hypothetical documents - hyde_prompt (Optional[Prompt]): Custom prompt for HyDE - include_original (bool): Whether to include original query - string as one of the embedding strings - """ - super().__init__() - - self._llm_predictor = llm_predictor or LLMPredictor() - self._hyde_prompt = hyde_prompt or DEFAULT_HYDE_PROMPT - self._include_original = include_original - - def __call__(self, query_str: str) -> QueryBundle: - """Run query transform.""" - # TODO: support generating multiple hypothetical docs - hypothetical_doc, _ = self._llm_predictor.predict( - self._hyde_prompt, context_str=query_str - ) - embedding_strs = [hypothetical_doc] - if self._include_original: - embedding_strs.append(query_str) - return QueryBundle( - query_str=query_str, - custom_embedding_strs=embedding_strs, - ) diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py b/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py deleted file mode 100644 index 85dd7a38f30639b377a504c2c0295e2b8955cea9..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/configs/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""This empty file is needed for loading the gin files in this directory.""" diff --git a/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py b/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py deleted file mode 100644 index 749149fd091f30fdae77d20c57cf6197d83874c9..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/.v1/gpt4free/quora/backup-mail.py +++ /dev/null @@ -1,45 +0,0 @@ -from json import loads -from re import findall -from time import sleep - -from requests import Session - - -class Mail: - def __init__(self) -> None: - self.client = Session() - self.client.post("https://etempmail.com/") - self.cookies = {'acceptcookie': 'true'} - self.cookies["ci_session"] = self.client.cookies.get_dict()["ci_session"] - self.email = None - - def get_mail(self): - respone = self.client.post("https://etempmail.com/getEmailAddress") - # cookies - self.cookies["lisansimo"] = eval(respone.text)["recover_key"] - self.email = eval(respone.text)["address"] - return self.email - - def get_message(self): - print("Waiting for message...") - while True: - sleep(5) - respone = self.client.post("https://etempmail.com/getInbox") - mail_token = loads(respone.text) - print(self.client.cookies.get_dict()) - if len(mail_token) == 1: - break - - params = { - 'id': '1', - } - self.mail_context = self.client.post("https://etempmail.com/getInbox", params=params) - self.mail_context = eval(self.mail_context.text)[0]["body"] - return self.mail_context - - # ,cookies=self.cookies - def get_verification_code(self): - message = self.mail_context - code = findall(r';">(\d{6,7})', message)[0] - print(f"Verification code: {code}") - return code diff --git a/spaces/kanden/vits-uma-genshin-honkai/attentions.py b/spaces/kanden/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/kanden/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, 
hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = 
self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). 
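            # How the pad-flatten-reshape trick below works: after the single-column pad
            # above, x is [b, h, l, 2*l]; flattening the last two dims gives a row of
            # length l * 2*l, and padding that row with l-1 extra zeros lets it be viewed
            # as [b, h, l+1, 2*l-1]. Each row of that view is skewed one position relative
            # to the previous one, so the slice [:, :, :l, l-1:] reads off the
            # absolute-position scores with shape [b, h, l, l].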
- x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/kanokon/GUI/README.md b/spaces/kanokon/GUI/README.md deleted file mode 100644 index f23537c524730cf03f30b8263326ecc93b486849..0000000000000000000000000000000000000000 --- a/spaces/kanokon/GUI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GUI -emoji: 🐨 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-dreambooth/dreambooth_fantasy/README.md b/spaces/keras-dreambooth/dreambooth_fantasy/README.md deleted file mode 100644 index 10f6d2eeb932702964ca38600036ff49f7eae291..0000000000000000000000000000000000000000 --- a/spaces/keras-dreambooth/dreambooth_fantasy/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Dreambooth Fantasy -emoji: 🐢 -colorFrom: purple -colorTo: gray -sdk: gradio 
-sdk_version: 3.21.0 -app_file: app.py -pinned: false -tags: -- keras-dreambooth -- scifi ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/VITS2-Mandarin/app.py b/spaces/kevinwang676/VITS2-Mandarin/app.py deleted file mode 100644 index 1b163f9bba9711a38ed00683aaeedfa8072a703f..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import argparse -import gradio as gr -from gradio import components -import os -import torch -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import text_to_sequence -from scipy.io.wavfile import write - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -def tts(model_path, config_path, text): - model_path = "./logs/G_23300.pth" - config_path = "./configs/config.json" - hps = utils.get_hparams_from_file(config_path) - - if "use_mel_posterior_encoder" in hps.model.keys() and hps.model.use_mel_posterior_encoder == True: - posterior_channels = 80 - hps.data.use_mel_posterior_encoder = True - else: - posterior_channels = hps.data.filter_length // 2 + 1 - hps.data.use_mel_posterior_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - posterior_channels, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda() - _ = net_g.eval() - _ = utils.load_checkpoint(model_path, net_g, None) - - stn_tst = get_text(text, hps) - x_tst = stn_tst.cuda().unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).cuda() - - with torch.no_grad(): - audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() - - output_wav_path = "output.wav" - write(output_wav_path, hps.data.sampling_rate, audio) - - return output_wav_path - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('--model_path', type=str, default="./logs/G_23300.pth", help='Path to the model file.') - parser.add_argument('--config_path', type=str, default="./configs/config.json", help='Path to the config file.') - args = parser.parse_args() - - model_files = [f for f in os.listdir('./logs/') if f.endswith('.pth')] - model_files.sort(key=lambda x: int(x.split('_')[-1].split('.')[0]), reverse=True) - config_files = [f for f in os.listdir('./configs/') if f.endswith('.json')] - - default_model_file = args.model_path if args.model_path else (model_files[0] if model_files else None) - default_config_file = args.config_path if args.config_path else 'config.json' - - gr.Interface( - fn=tts, - inputs=components.Textbox(label="Text Input"), - outputs=components.Audio(type='filepath', label="Generated Speech"), - live=False - ).launch(show_error=True) \ No newline at end of file diff --git a/spaces/kevinwang676/VITS2-Mandarin/train.py b/spaces/kevinwang676/VITS2-Mandarin/train.py deleted file mode 100644 index 3afdb475b62910c12c75eb929911bbf6c00c3ebc..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/train.py +++ /dev/null @@ -1,453 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter 
-import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -import tqdm -from pqmf import PQMF -import commons -import utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, - AVAILABLE_FLOW_TYPES, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss, - subband_stft_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.autograd.set_detect_anomaly(True) -torch.backends.cudnn.benchmark = True -global_step = 0 - - -# - base vits2 : Aug 29, 2023 -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '6060' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - if os.name == 'nt': - dist.init_process_group(backend='gloo', init_method='env://', world_size=n_gpus, rank=rank) - else: - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - if "use_mel_posterior_encoder" in hps.model.keys() and hps.model.use_mel_posterior_encoder == True: # P.incoder for vits2 - print("Using mel posterior encoder for VITS2") - posterior_channels = 80 # vits2 - hps.data.use_mel_posterior_encoder = True - else: - print("Using lin posterior encoder for VITS1") - posterior_channels = hps.data.filter_length // 2 + 1 - hps.data.use_mel_posterior_encoder = False - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - # some of these flags are not being used in the code and directly set in hps json file. - # they are kept here for reference and prototyping. 
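    # The four feature checks below switch between VITS2 behaviour and the VITS1
    # fallback, driven purely by optional keys in hps.model:
    #   use_transformer_flows       - transformer-based flow blocks; the concrete variant comes from
    #                                 hps.model.transformer_flow_type and must be in AVAILABLE_FLOW_TYPES
    #   use_spk_conditioned_encoder - speaker-conditioned text encoder; turned off for
    #                                 single-speaker data (n_speakers == 0)
    #   use_noise_scaled_mas        - monotonic alignment search with a noise scale that starts at 0.01
    #                                 and decays by 2e-6 per global step (clamped at 0 in the train loop)
    #   use_duration_discriminator  - builds a DurationDiscriminator that is trained adversarially
    #                                 alongside the generator and the multi-period discriminator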
- - if "use_transformer_flows" in hps.model.keys() and hps.model.use_transformer_flows == True: - use_transformer_flows = True - transformer_flow_type = hps.model.transformer_flow_type - print(f"Using transformer flows {transformer_flow_type} for VITS2") - assert transformer_flow_type in AVAILABLE_FLOW_TYPES, f"transformer_flow_type must be one of {AVAILABLE_FLOW_TYPES}" - else: - print("Using normal flows for VITS1") - use_transformer_flows = False - - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - print("Warning: use_spk_conditioned_encoder is True but n_speakers is 0") - print("Setting use_spk_conditioned_encoder to False as model is a single speaker model") - use_spk_conditioned_encoder = False - else: - print("Using normal encoder for VITS1 (cuz it's single speaker after all)") - use_spk_conditioned_encoder = False - - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - else: - print("NOT using any duration discriminator like VITS1") - net_dur_disc = None - use_duration_discriminator = False - - net_g = SynthesizerTrn( - len(symbols), - posterior_channels, - hps.train.segment_size // hps.data.hop_length, - mas_noise_scale_initial=mas_noise_scale_initial, - noise_scale_delta=noise_scale_delta, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - - if net_dur_disc is not None: # 2의 경우 - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d) - if net_dur_disc is not None: # 2의 경우 - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), - net_dur_disc, optim_dur_disc) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, 
gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: # 2의 경우 - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, - last_epoch=epoch_str - 2) - else: - scheduler_dur_disc = None - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], - logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: # vits2 - net_dur_disc.train() - - if rank == 0: - loader = tqdm.tqdm(train_loader, desc='Loading training data') - else: - loader = train_loader - - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(loader): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, y_hat_mb, l_length, attn, ids_slice, x_mask, z_mask, (z, z_p, m_p, logs_p, m_q, logs_q), ( - hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths) - - if hps.model.use_mel_posterior_encoder or hps.data.use_mel_posterior_encoder: - mel = spec - else: - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - # Duration Discriminator - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw_.detach(), - logw.detach()) # logw is predicted duration, logw_ is real duration - with autocast(enabled=False): - # TODO: I think need to mean using the 
mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw_, logw) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - - if hps.model.mb_istft_vits == True: - pqmf = PQMF(y.device) - y_mb = pqmf.analysis(y) - loss_subband = subband_stft_loss(hps, y_mb, y_hat_mb) - else: - loss_subband = torch.tensor(0.0) - - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl + loss_subband - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl, loss_subband] - - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - - if net_dur_disc is not None: # 2인 경우 - scalar_dict.update( - {"loss/dur_disc/total": loss_dur_disc_all, "grad_norm_dur_disc": grad_norm_dur_disc}) - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl, - "loss/g/subband": loss_subband}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - # if net_dur_disc is not None: # - 보류? 
- # scalar_dict.update({"loss/dur_disc_r" : f"{losses_dur_disc_r}"}) - # scalar_dict.update({"loss/dur_disc_g" : f"{losses_dur_disc_g}"}) - # scalar_dict.update({"loss/dur_gen" : f"{loss_dur_gen}"}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - - y_hat, y_hat_mb, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - if hps.model.use_mel_posterior_encoder or hps.data.use_mel_posterior_encoder: # 2의 경우 - mel = spec - else: - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0, :, :y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL" - main() diff --git a/spaces/kevinwang676/VoiceChangers/rmvpe.py b/spaces/kevinwang676/VoiceChangers/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, 
hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - 
out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = 
int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - 
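        # `devided` above is the salience-weighted average of the cents mapping over the
        # nine bins (argmax - 4 .. argmax + 4) around each frame's peak, i.e. local-average
        # decoding of the pitch salience. Frames whose peak salience stays at or below
        # `thred` are zeroed below; decode() later converts cents to Hz with
        # f0 = 10 * 2 ** (cents / 1200) and maps the cents == 0 (f0 == 10) frames back
        # to 0, i.e. unvoiced.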
maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html b/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html deleted file mode 100644 index 47ff53b5cfd8549e8f1ec797083aafa99f7eb4d7..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/templ/templ_showDataframe.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - Fourthbrain Capstone: Healthcare Anomalies - - - - -

              {{ paramTitle }}:

              - - - {{ paramDataframe | safe }} - - \ No newline at end of file diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points, boxes): - """Find the box in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M), default background = -1 - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. - # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points, boxes): - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in - LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. 
- """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - point_indices = points.new_zeros((batch_size, num_boxes, num_points), - dtype=torch.int) - for b in range(batch_size): - ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(), - points[b].float().contiguous(), - point_indices[b]) - point_indices = point_indices.transpose(1, 2) - - return point_indices - - -def points_in_boxes_all(points, boxes): - """Find all boxes in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. - """ - assert boxes.shape[0] == points.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {boxes.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes), - dtype=torch.int).fill_(0) - - # Same reason as line 25-32 - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_all_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts diff --git a/spaces/knkarthick/Meeting-Demo/app.py b/spaces/knkarthick/Meeting-Demo/app.py deleted file mode 100644 index ff9fa2b6696368b31c2939194e091f7ba6126b38..0000000000000000000000000000000000000000 --- a/spaces/knkarthick/Meeting-Demo/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -os.system("pip install gradio==3.0.18") -from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoModelForTokenClassification -import gradio as gr -import spacy -nlp = spacy.load('en_core_web_sm') -nlp.add_pipe('sentencizer') - -def split_in_sentences(text): - doc = nlp(text) - return [str(sent).strip() for sent in doc.sents] - -def make_spans(text,results): - results_list = [] - for i in range(len(results)): - results_list.append(results[i]['label']) - facts_spans = [] - facts_spans = list(zip(split_in_sentences(text),results_list)) - return facts_spans - -auth_token = os.environ.get("HF_Token") - -##Speech Recognition -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") -def transcribe(audio): - text = asr(audio)["text"] - return text -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -##Summarization -summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM") -def summarize_text(text): - resp = summarizer(text) - stext = resp[0]['summary_text'] - return stext - 
-summarizer1 = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") -def summarize_text1(text): - resp = summarizer1(text) - stext = resp[0]['summary_text'] - return stext - -summarizer2 = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI") -def summarize_text2(text): - resp = summarizer2(text) - stext = resp[0]['summary_text'] - return stext - -##Fiscal Tone Analysis -sen_model= pipeline("sentiment-analysis", model='knkarthick/Sentiment-Analysis', tokenizer='knkarthick/Sentiment-Analysis') -def text_to_sentiment(text): - sentiment = sen_model(text)[0]["label"] - return sentiment - -##Fiscal Sentiment by Sentence -def sen_ext(text): - results = sen_model(split_in_sentences(text)) - return make_spans(text,results) - -demo = gr.Blocks() - -with demo: - gr.Markdown("## Meeting Transcript AI Use Cases") - gr.Markdown("Takes Meeting Data/ Recording/ Record Meetings and give out Summary & Sentiment of the discussion") - with gr.Row(): - with gr.Column(): - audio_file = gr.inputs.Audio(source="microphone", type="filepath") - with gr.Row(): - b1 = gr.Button("Recognize Speech") - with gr.Row(): - text = gr.Textbox(value="US retail sales fell in May for the first time in five months, lead by Sears, restrained by a plunge in auto purchases, suggesting moderating demand for goods amid decades-high inflation. The value of overall retail purchases decreased 0.3%, after a downwardly revised 0.7% gain in April, Commerce Department figures showed Wednesday. Excluding Tesla vehicles, sales rose 0.5% last month. The department expects inflation to continue to rise.") - b1.click(speech_to_text, inputs=audio_file, outputs=text) - with gr.Row(): - b2 = gr.Button("Overall Sentiment Analysis of Dialogues") - fin_spans = gr.HighlightedText() - b2.click(sen_ext, inputs=text, outputs=fin_spans) - with gr.Row(): - b3 = gr.Button("Summary Text Outputs") - with gr.Column(): - with gr.Row(): - stext = gr.Textbox(label="Model-I") - b3.click(summarize_text, inputs=text, outputs=stext) - with gr.Column(): - with gr.Row(): - stext1 = gr.Textbox(label="Model-II") - b3.click(summarize_text1, inputs=text, outputs=stext1) - with gr.Column(): - with gr.Row(): - stext2 = gr.Textbox(label="Model-III") - b3.click(summarize_text2, inputs=text, outputs=stext2) - with gr.Row(): - b4 = gr.Button("Sentiment Analysis") - with gr.Column(): - with gr.Row(): - label = gr.Label(label="Sentiment Of Summary-I") - b4.click(text_to_sentiment, inputs=stext, outputs=label) - with gr.Column(): - with gr.Row(): - label1 = gr.Label(label="Sentiment Of Summary-II") - b4.click(text_to_sentiment, inputs=stext1, outputs=label1) - with gr.Column(): - with gr.Row(): - label2 = gr.Label(label="Sentiment Of Summary-III") - b4.click(text_to_sentiment, inputs=stext2, outputs=label2) - with gr.Row(): - b5 = gr.Button("Dialogue Sentiment Analysis") - with gr.Column(): - with gr.Row(): - fin_spans = gr.HighlightedText(label="Sentiment Of Summary-I Dialogues") - b5.click(sen_ext, inputs=stext, outputs=fin_spans) - with gr.Column(): - with gr.Row(): - fin_spans1 = gr.HighlightedText(label="Sentiment Of Summary-II Dialogues") - b5.click(sen_ext, inputs=stext1, outputs=fin_spans1) - with gr.Column(): - with gr.Row(): - fin_spans2 = gr.HighlightedText(label="Sentiment Of Summary-III Dialogues") - b5.click(sen_ext, inputs=stext2, outputs=fin_spans2) -demo.launch() \ No newline at end of file diff --git 
a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py b/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py deleted file mode 100644 index 0b02ce18772454697e61f827d96d76ad361b9cd1..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import ChoiceEnum, FairseqDataclass - - -_EPSILON = torch.finfo(torch.float32).eps -TARGET_DIST_NORM_CHOICES = ChoiceEnum(["none", "minmax"]) - - -@dataclass -class KLDivergenceRerankingCriterionConfig(FairseqDataclass): - target_dist_norm: TARGET_DIST_NORM_CHOICES = field( - default="none", - metadata={"help": "method to normalize the range of target scores"}, - ) - temperature: float = field( - default=1.0, - metadata={"help": "temperature in softmax for target distributions"}, - ) - forward_batch_size: int = field( - default=32, - metadata={ - "help": "number of hypotheses per batch for model forward (set a value smaller than --mt-beam to avoid OOM when training with a large beam size)" - }, - ) - - -@register_criterion( - "kl_divergence_rereanking", dataclass=KLDivergenceRerankingCriterionConfig -) -class KLDivergenceRerankingCriterion(FairseqCriterion): - def __init__( - self, task, target_dist_norm, temperature, forward_batch_size, - ): - super().__init__(task) - self.target_dist_norm = target_dist_norm - self.temperature = temperature - self.forward_batch_size = forward_batch_size - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - sample_size = sample["id"].numel() - assert sample_size % self.task.cfg.mt_beam == 0, ( - f"sample_size ({sample_size}) cannot be divided by beam size ({self.task.cfg.mt_beam})." - f"Please set --required-batch-size-multiple={self.task.cfg.mt_beam}." 
- ) - - # split into smaller batches for model forward - batch_out = [] - for i in range(0, sample_size, self.forward_batch_size): - j = min(i + self.forward_batch_size, sample_size) - - out = model( - src_tokens=sample["net_input"]["src_tokens"][i:j, :], - src_lengths=sample["net_input"]["src_lengths"][i:j], - ) - - batch_out.append( - model.sentence_forward(out, sample["net_input"]["src_tokens"][i:j, :]) - ) - - batch_out = torch.cat(batch_out, dim=0).view( - self.task.cfg.mt_beam, sample_size // self.task.cfg.mt_beam, -1 - ) # T x B x C - if model.joint_classification == "sent": - batch_out = model.joint_forward(batch_out) - scores = model.classification_forward(batch_out.view(sample_size, 1, -1)).view( - -1, self.task.cfg.mt_beam - ) # input: B x T x C - - loss = self.compute_kl_loss( - scores, sample["target"][:, 0].view(-1, self.task.cfg.mt_beam) - ) - - sample_size = sample_size // self.task.cfg.mt_beam - - logging_output = { - "loss": loss.detach(), - "ntokens": sample["ntokens"], - "nsentences": sample_size * self.task.cfg.mt_beam, - "sample_size": sample_size, - "scores": scores.detach(), - } - - return loss, sample_size, logging_output - - def compute_kl_loss(self, logits, target): - norm_target = target - if self.target_dist_norm == "minmax": - min_v = torch.min(target, 1, keepdim=True).values - max_v = torch.max(target, 1, keepdim=True).values - norm_target = (target - min_v) / (max_v - min_v + _EPSILON) - - target_dist = F.softmax( - norm_target / self.temperature, dim=-1, dtype=torch.float32 - ) - model_dist = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = -(target_dist * model_dist - target_dist * target_dist.log()).sum() - return loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - loss = loss_sum / sample_size / math.log(2) - metrics.log_scalar("loss", loss, sample_size, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh deleted file mode 100644 index c523d92634d9b61b97bbcdbfd17dfc33465bfc09..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/seg_ko.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
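# The exports below put the bundled mecab-0.996-ko-0.9.2 binaries and libraries on
# PATH / LD_LIBRARY_PATH; the script then reads text from stdin and writes it back
# out word-segmented by `mecab -O wakati` (whitespace-separated Korean tokens).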
-SCRIPT=`realpath $0` -MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2 - -export PATH=$PATH:"$MECAB/bin":"$MECAB/lib" -export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib" - -cat - | mecab -O wakati diff --git a/spaces/kouenYoung/anime-tts/commons.py b/spaces/kouenYoung/anime-tts/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/kouenYoung/anime-tts/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py b/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py deleted file mode 100644 index 322147483915bef758770ae931e705e56083fa8d..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/make_checkpoint.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python3 - -import os -import shutil - -import torch - - -def get_checkpoint_files(s): - s = s.strip() - if ',' in s: - return [get_checkpoint_files(chunk) for chunk in s.split(',')] - return 'last.ckpt' 
if s == 'last' else f'{s}.ckpt' - - -def main(args): - checkpoint_fnames = get_checkpoint_files(args.epochs) - if isinstance(checkpoint_fnames, str): - checkpoint_fnames = [checkpoint_fnames] - assert len(checkpoint_fnames) >= 1 - - checkpoint_path = os.path.join(args.indir, 'models', checkpoint_fnames[0]) - checkpoint = torch.load(checkpoint_path, map_location='cpu') - del checkpoint['optimizer_states'] - - if len(checkpoint_fnames) > 1: - for fname in checkpoint_fnames[1:]: - print('sum', fname) - sum_tensors_cnt = 0 - other_cp = torch.load(os.path.join(args.indir, 'models', fname), map_location='cpu') - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.add_(other_cp['state_dict'][k].data) - sum_tensors_cnt += 1 - print('summed', sum_tensors_cnt, 'tensors') - - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.mul_(1 / float(len(checkpoint_fnames))) - - state_dict = checkpoint['state_dict'] - - if not args.leave_discriminators: - for k in list(state_dict.keys()): - if k.startswith('discriminator.'): - del state_dict[k] - - if not args.leave_losses: - for k in list(state_dict.keys()): - if k.startswith('loss_'): - del state_dict[k] - - out_checkpoint_path = os.path.join(args.outdir, 'models', 'best.ckpt') - os.makedirs(os.path.dirname(out_checkpoint_path), exist_ok=True) - - torch.save(checkpoint, out_checkpoint_path) - - shutil.copy2(os.path.join(args.indir, 'config.yaml'), - os.path.join(args.outdir, 'config.yaml')) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('indir', - help='Path to directory with output of training ' - '(i.e. directory, which has samples, modules, config.yaml and train.log') - aparser.add_argument('outdir', - help='Where to put minimal checkpoint, which can be consumed by "bin/predict.py"') - aparser.add_argument('--epochs', type=str, default='last', - help='Which checkpoint to take. ' - 'Can be "last" or integer - number of epoch') - aparser.add_argument('--leave-discriminators', action='store_true', - help='If enabled, the state of discriminators will not be removed from the checkpoint') - aparser.add_argument('--leave-losses', action='store_true', - help='If enabled, weights of nn-based losses (e.g. 
perceptual) will not be removed') - - main(aparser.parse_args()) diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py deleted file mode 100644 index aa9e80402633c08a580929b38a5cb695cb7171d8..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/evaluator.py +++ /dev/null @@ -1,220 +0,0 @@ -import logging -import math -from typing import Dict - -import numpy as np -import torch -import torch.nn as nn -import tqdm -from torch.utils.data import DataLoader - -from saicinpainting.evaluation.utils import move_to_device - -LOGGER = logging.getLogger(__name__) - - -class InpaintingEvaluator(): - def __init__(self, dataset, scores, area_grouping=True, bins=10, batch_size=32, device='cuda', - integral_func=None, integral_title=None, clamp_image_range=None): - """ - :param dataset: torch.utils.data.Dataset which contains images and masks - :param scores: dict {score_name: EvaluatorScore object} - :param area_grouping: in addition to the overall scores, allows to compute score for the groups of samples - which are defined by share of area occluded by mask - :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1) - :param batch_size: batch_size for the dataloader - :param device: device to use - """ - self.scores = scores - self.dataset = dataset - - self.area_grouping = area_grouping - self.bins = bins - - self.device = torch.device(device) - - self.dataloader = DataLoader(self.dataset, shuffle=False, batch_size=batch_size) - - self.integral_func = integral_func - self.integral_title = integral_title - self.clamp_image_range = clamp_image_range - - def _get_bin_edges(self): - bin_edges = np.linspace(0, 1, self.bins + 1) - - num_digits = max(0, math.ceil(math.log10(self.bins)) - 1) - interval_names = [] - for idx_bin in range(self.bins): - start_percent, end_percent = round(100 * bin_edges[idx_bin], num_digits), \ - round(100 * bin_edges[idx_bin + 1], num_digits) - start_percent = '{:.{n}f}'.format(start_percent, n=num_digits) - end_percent = '{:.{n}f}'.format(end_percent, n=num_digits) - interval_names.append("{0}-{1}%".format(start_percent, end_percent)) - - groups = [] - for batch in self.dataloader: - mask = batch['mask'] - batch_size = mask.shape[0] - area = mask.to(self.device).reshape(batch_size, -1).mean(dim=-1) - bin_indices = np.searchsorted(bin_edges, area.detach().cpu().numpy(), side='right') - 1 - # corner case: when area is equal to 1, bin_indices should return bins - 1, not bins for that element - bin_indices[bin_indices == self.bins] = self.bins - 1 - groups.append(bin_indices) - groups = np.hstack(groups) - - return groups, interval_names - - def evaluate(self, model=None): - """ - :param model: callable with signature (image_batch, mask_batch); should return inpainted_batch - :return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or - name of the particular group arranged by area of mask (e.g. '10-20%') - and score statistics for the group as values. 
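-
-        Example (illustrative sketch; my_dataset, MyScore and my_model are
-        placeholder names, not defined in this module):
-
-            evaluator = InpaintingEvaluator(my_dataset, scores={'ssim': MyScore()})
-            results = evaluator.evaluate(model=my_model)
-            mean_ssim = results[('ssim', 'total')]['mean']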
- """ - results = dict() - if self.area_grouping: - groups, interval_names = self._get_bin_edges() - else: - groups = None - - for score_name, score in tqdm.auto.tqdm(self.scores.items(), desc='scores'): - score.to(self.device) - with torch.no_grad(): - score.reset() - for batch in tqdm.auto.tqdm(self.dataloader, desc=score_name, leave=False): - batch = move_to_device(batch, self.device) - image_batch, mask_batch = batch['image'], batch['mask'] - if self.clamp_image_range is not None: - image_batch = torch.clamp(image_batch, - min=self.clamp_image_range[0], - max=self.clamp_image_range[1]) - if model is None: - assert 'inpainted' in batch, \ - 'Model is None, so we expected precomputed inpainting results at key "inpainted"' - inpainted_batch = batch['inpainted'] - else: - inpainted_batch = model(image_batch, mask_batch) - score(inpainted_batch, image_batch, mask_batch) - total_results, group_results = score.get_value(groups=groups) - - results[(score_name, 'total')] = total_results - if groups is not None: - for group_index, group_values in group_results.items(): - group_name = interval_names[group_index] - results[(score_name, group_name)] = group_values - - if self.integral_func is not None: - results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results)) - - return results - - -def ssim_fid100_f1(metrics, fid_scale=100): - ssim = metrics[('ssim', 'total')]['mean'] - fid = metrics[('fid', 'total')]['mean'] - fid_rel = max(0, fid_scale - fid) / fid_scale - f1 = 2 * ssim * fid_rel / (ssim + fid_rel + 1e-3) - return f1 - - -def lpips_fid100_f1(metrics, fid_scale=100): - neg_lpips = 1 - metrics[('lpips', 'total')]['mean'] # invert, so bigger is better - fid = metrics[('fid', 'total')]['mean'] - fid_rel = max(0, fid_scale - fid) / fid_scale - f1 = 2 * neg_lpips * fid_rel / (neg_lpips + fid_rel + 1e-3) - return f1 - - - -class InpaintingEvaluatorOnline(nn.Module): - def __init__(self, scores, bins=10, image_key='image', inpainted_key='inpainted', - integral_func=None, integral_title=None, clamp_image_range=None): - """ - :param scores: dict {score_name: EvaluatorScore object} - :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1) - :param device: device to use - """ - super().__init__() - LOGGER.info(f'{type(self)} init called') - self.scores = nn.ModuleDict(scores) - self.image_key = image_key - self.inpainted_key = inpainted_key - self.bins_num = bins - self.bin_edges = np.linspace(0, 1, self.bins_num + 1) - - num_digits = max(0, math.ceil(math.log10(self.bins_num)) - 1) - self.interval_names = [] - for idx_bin in range(self.bins_num): - start_percent, end_percent = round(100 * self.bin_edges[idx_bin], num_digits), \ - round(100 * self.bin_edges[idx_bin + 1], num_digits) - start_percent = '{:.{n}f}'.format(start_percent, n=num_digits) - end_percent = '{:.{n}f}'.format(end_percent, n=num_digits) - self.interval_names.append("{0}-{1}%".format(start_percent, end_percent)) - - self.groups = [] - - self.integral_func = integral_func - self.integral_title = integral_title - self.clamp_image_range = clamp_image_range - - LOGGER.info(f'{type(self)} init done') - - def _get_bins(self, mask_batch): - batch_size = mask_batch.shape[0] - area = mask_batch.view(batch_size, -1).mean(dim=-1).detach().cpu().numpy() - bin_indices = np.clip(np.searchsorted(self.bin_edges, area) - 1, 0, self.bins_num - 1) - return bin_indices - - def forward(self, batch: Dict[str, torch.Tensor]): - """ - Calculate and accumulate metrics for batch. 
To finalize evaluation and obtain final metrics, call evaluation_end - :param batch: batch dict with mandatory fields mask, image, inpainted (can be overriden by self.inpainted_key) - """ - result = {} - with torch.no_grad(): - image_batch, mask_batch, inpainted_batch = batch[self.image_key], batch['mask'], batch[self.inpainted_key] - if self.clamp_image_range is not None: - image_batch = torch.clamp(image_batch, - min=self.clamp_image_range[0], - max=self.clamp_image_range[1]) - self.groups.extend(self._get_bins(mask_batch)) - - for score_name, score in self.scores.items(): - result[score_name] = score(inpainted_batch, image_batch, mask_batch) - return result - - def process_batch(self, batch: Dict[str, torch.Tensor]): - return self(batch) - - def evaluation_end(self, states=None): - """:return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or - name of the particular group arranged by area of mask (e.g. '10-20%') - and score statistics for the group as values. - """ - LOGGER.info(f'{type(self)}: evaluation_end called') - - self.groups = np.array(self.groups) - - results = {} - for score_name, score in self.scores.items(): - LOGGER.info(f'Getting value of {score_name}') - cur_states = [s[score_name] for s in states] if states is not None else None - total_results, group_results = score.get_value(groups=self.groups, states=cur_states) - LOGGER.info(f'Getting value of {score_name} done') - results[(score_name, 'total')] = total_results - - for group_index, group_values in group_results.items(): - group_name = self.interval_names[group_index] - results[(score_name, group_name)] = group_values - - if self.integral_func is not None: - results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results)) - - LOGGER.info(f'{type(self)}: reset scores') - self.groups = [] - for sc in self.scores.values(): - sc.reset() - LOGGER.info(f'{type(self)}: reset scores done') - - LOGGER.info(f'{type(self)}: evaluation_end done') - return results diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py deleted file mode 100644 index eac27e679bd2f18dd33d0ee2ff405c8eee4caecf..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/SpiderImagePlugin.py +++ /dev/null @@ -1,318 +0,0 @@ -# -# The Python Imaging Library. -# -# SPIDER image file handling -# -# History: -# 2004-08-02 Created BB -# 2006-03-02 added save method -# 2006-03-13 added support for stack images -# -# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144. -# Copyright (c) 2004 by William Baxter. -# Copyright (c) 2004 by Secret Labs AB. -# Copyright (c) 2004 by Fredrik Lundh. -# - -## -# Image plugin for the Spider image format. This format is used -# by the SPIDER software, in processing image data from electron -# microscopy and tomography. -## - -# -# SpiderImagePlugin.py -# -# The Spider image format is used by SPIDER software, in processing -# image data from electron microscopy and tomography. 
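-#
-# Minimal usage sketch (illustrative; "img.spi" is a placeholder path). The
-# plugin registers itself with PIL.Image at the bottom of this module, so
-# Spider files can be opened like any other image:
-#
-#     from PIL import Image
-#     with Image.open("img.spi") as im:
-#         print(im.format, im.size, im.mode)  # SPIDER, (width, height), "F"
-#         byte_im = im.convert2byte()         # rescale float pixels to an 8-bit "L" image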
-# -# Spider home page: -# https://spider.wadsworth.org/spider_doc/spider/docs/spider.html -# -# Details about the Spider image format: -# https://spider.wadsworth.org/spider_doc/spider/docs/image_doc.html -# -import os -import struct -import sys - -from PIL import Image, ImageFile - - -def isInt(f): - try: - i = int(f) - if f - i == 0: - return 1 - else: - return 0 - except (ValueError, OverflowError): - return 0 - - -iforms = [1, 3, -11, -12, -21, -22] - - -# There is no magic number to identify Spider files, so just check a -# series of header locations to see if they have reasonable values. -# Returns no. of bytes in the header, if it is a valid Spider header, -# otherwise returns 0 - - -def isSpiderHeader(t): - h = (99,) + t # add 1 value so can use spider header index start=1 - # header values 1,2,5,12,13,22,23 should be integers - for i in [1, 2, 5, 12, 13, 22, 23]: - if not isInt(h[i]): - return 0 - # check iform - iform = int(h[5]) - if iform not in iforms: - return 0 - # check other header values - labrec = int(h[13]) # no. records in file header - labbyt = int(h[22]) # total no. of bytes in header - lenbyt = int(h[23]) # record length in bytes - if labbyt != (labrec * lenbyt): - return 0 - # looks like a valid header - return labbyt - - -def isSpiderImage(filename): - with open(filename, "rb") as fp: - f = fp.read(92) # read 23 * 4 bytes - t = struct.unpack(">23f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - t = struct.unpack("<23f", f) # little-endian - hdrlen = isSpiderHeader(t) - return hdrlen - - -class SpiderImageFile(ImageFile.ImageFile): - format = "SPIDER" - format_description = "Spider 2D image" - _close_exclusive_fp_after_loading = False - - def _open(self): - # check header - n = 27 * 4 # read 27 float values - f = self.fp.read(n) - - try: - self.bigendian = 1 - t = struct.unpack(">27f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - self.bigendian = 0 - t = struct.unpack("<27f", f) # little-endian - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - msg = "not a valid Spider file" - raise SyntaxError(msg) - except struct.error as e: - msg = "not a valid Spider file" - raise SyntaxError(msg) from e - - h = (99,) + t # add 1 value : spider header index starts at 1 - iform = int(h[5]) - if iform != 1: - msg = "not a Spider 2D image" - raise SyntaxError(msg) - - self._size = int(h[12]), int(h[2]) # size in pixels (width, height) - self.istack = int(h[24]) - self.imgnumber = int(h[27]) - - if self.istack == 0 and self.imgnumber == 0: - # stk=0, img=0: a regular 2D image - offset = hdrlen - self._nimages = 1 - elif self.istack > 0 and self.imgnumber == 0: - # stk>0, img=0: Opening the stack for the first time - self.imgbytes = int(h[12]) * int(h[2]) * 4 - self.hdrlen = hdrlen - self._nimages = int(h[26]) - # Point to the first image in the stack - offset = hdrlen * 2 - self.imgnumber = 1 - elif self.istack == 0 and self.imgnumber > 0: - # stk=0, img>0: an image within the stack - offset = hdrlen + self.stkoffset - self.istack = 2 # So Image knows it's still a stack - else: - msg = "inconsistent stack header values" - raise SyntaxError(msg) - - if self.bigendian: - self.rawmode = "F;32BF" - else: - self.rawmode = "F;32F" - self.mode = "F" - - self.tile = [("raw", (0, 0) + self.size, offset, (self.rawmode, 0, 1))] - self._fp = self.fp # FIXME: hack - - @property - def n_frames(self): - return self._nimages - - @property - def is_animated(self): - return self._nimages > 1 - - # 1st image index is zero (although SPIDER 
imgnumber starts at 1) - def tell(self): - if self.imgnumber < 1: - return 0 - else: - return self.imgnumber - 1 - - def seek(self, frame): - if self.istack == 0: - msg = "attempt to seek in a non-stack file" - raise EOFError(msg) - if not self._seek_check(frame): - return - self.stkoffset = self.hdrlen + frame * (self.hdrlen + self.imgbytes) - self.fp = self._fp - self.fp.seek(self.stkoffset) - self._open() - - # returns a byte image after rescaling to 0..255 - def convert2byte(self, depth=255): - (minimum, maximum) = self.getextrema() - m = 1 - if maximum != minimum: - m = depth / (maximum - minimum) - b = -m * minimum - return self.point(lambda i, m=m, b=b: i * m + b).convert("L") - - # returns a ImageTk.PhotoImage object, after rescaling to 0..255 - def tkPhotoImage(self): - from PIL import ImageTk - - return ImageTk.PhotoImage(self.convert2byte(), palette=256) - - -# -------------------------------------------------------------------- -# Image series - - -# given a list of filenames, return a list of images -def loadImageSeries(filelist=None): - """create a list of :py:class:`~PIL.Image.Image` objects for use in a montage""" - if filelist is None or len(filelist) < 1: - return - - imglist = [] - for img in filelist: - if not os.path.exists(img): - print(f"unable to find {img}") - continue - try: - with Image.open(img) as im: - im = im.convert2byte() - except Exception: - if not isSpiderImage(img): - print(img + " is not a Spider image file") - continue - im.info["filename"] = img - imglist.append(im) - return imglist - - -# -------------------------------------------------------------------- -# For saving images in Spider format - - -def makeSpiderHeader(im): - nsam, nrow = im.size - lenbyt = nsam * 4 # There are labrec records in the header - labrec = int(1024 / lenbyt) - if 1024 % lenbyt != 0: - labrec += 1 - labbyt = labrec * lenbyt - nvalues = int(labbyt / 4) - if nvalues < 23: - return [] - - hdr = [] - for i in range(nvalues): - hdr.append(0.0) - - # NB these are Fortran indices - hdr[1] = 1.0 # nslice (=1 for an image) - hdr[2] = float(nrow) # number of rows per slice - hdr[3] = float(nrow) # number of records in the image - hdr[5] = 1.0 # iform for 2D image - hdr[12] = float(nsam) # number of pixels per line - hdr[13] = float(labrec) # number of records in file header - hdr[22] = float(labbyt) # total number of bytes in header - hdr[23] = float(lenbyt) # record length in bytes - - # adjust for Fortran indexing - hdr = hdr[1:] - hdr.append(0.0) - # pack binary data into a string - return [struct.pack("f", v) for v in hdr] - - -def _save(im, fp, filename): - if im.mode[0] != "F": - im = im.convert("F") - - hdr = makeSpiderHeader(im) - if len(hdr) < 256: - msg = "Error creating Spider header" - raise OSError(msg) - - # write the SPIDER header - fp.writelines(hdr) - - rawmode = "F;32NF" # 32-bit native floating point - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, 1))]) - - -def _save_spider(im, fp, filename): - # get the filename extension and register it with Image - ext = os.path.splitext(filename)[1] - Image.register_extension(SpiderImageFile.format, ext) - _save(im, fp, filename) - - -# -------------------------------------------------------------------- - - -Image.register_open(SpiderImageFile.format, SpiderImageFile) -Image.register_save(SpiderImageFile.format, _save_spider) - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 SpiderImagePlugin.py [infile] [outfile]") - sys.exit() - - filename = sys.argv[1] - if not 
isSpiderImage(filename): - print("input image must be in Spider format") - sys.exit() - - with Image.open(filename) as im: - print("image: " + str(im)) - print("format: " + str(im.format)) - print("size: " + str(im.size)) - print("mode: " + str(im.mode)) - print("max, min: ", end=" ") - print(im.getextrema()) - - if len(sys.argv) > 2: - outfile = sys.argv[2] - - # perform some image operation - im = im.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - print( - f"saving a flipped version of {os.path.basename(filename)} " - f"as {outfile} " - ) - im.save(outfile, SpiderImageFile.format) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/contourpy/util/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/contourpy/util/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css deleted file mode 100644 index 3968f6c1a3f8d685db4eabfbc029d0a99dbb5626..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-368270cd.css +++ /dev/null @@ -1 +0,0 @@ -@font-face{font-family:KaTeX_AMS;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-0cdd387c.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-30da91e8.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_AMS-Regular-68534840.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-de7701e4.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-1ae6bd74.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Bold-07d8e303.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-5d53e70a.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-3398dd02.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Caligraphic-Regular-ed0b7437.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-74444efd.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-9be7ceb8.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Bold-9163df9c.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-51814d27.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-5e28753b.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Fraktur-Regular-1e6f9579.ttf) 
format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-0f60d1b8.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-c76c5d69.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Bold-138ac28d.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-99cd42a3.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-a6f7ec0d.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-BoldItalic-70ee1f64.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-97479ca6.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-f1d6ef86.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Italic-0d85ae7c.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-c2342cd8.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-c6368d87.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Main-Regular-d0332f52.ttf) format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-dc47344d.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-850c0af5.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-BoldItalic-f9377ab0.ttf) format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-7af58c5e.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-8a8d2445.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Math-Italic-08ce98e5.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:700;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-e99ae511.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-ece03cfd.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Bold-1ece03f7.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:italic;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-00b26ac8.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-91ee6750.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Italic-3931dd81.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-68e8c73e.woff2) 
format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-11e4dc8a.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_SansSerif-Regular-f36ea897.ttf) format("truetype")}@font-face{font-family:KaTeX_Script;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-036d4e95.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-d96cdf2b.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Script-Regular-1c67f068.ttf) format("truetype")}@font-face{font-family:KaTeX_Size1;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-6b47c401.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-c943cc98.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size1-Regular-95b6d2f1.ttf) format("truetype")}@font-face{font-family:KaTeX_Size2;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-d04c5421.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-2014c523.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size2-Regular-a6b2099f.ttf) format("truetype")}@font-face{font-family:KaTeX_Size3;font-style:normal;font-weight:400;src:url(data:font/woff2;base64,d09GMgABAAAAAA4oAA4AAAAAHbQAAA3TAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAABmAAgRQIDgmcDBEICo1oijYBNgIkA14LMgAEIAWJAAeBHAyBHBvbGiMRdnO0IkRRkiYDgr9KsJ1NUAf2kILNxgUmgqIgq1P89vcbIcmsQbRps3vCcXdYOKSWEPEKgZgQkprQQsxIXUgq0DqpGKmIvrgkeVGtEQD9DzAO29fM9jYhxZEsL2FeURH2JN4MIcTdO049NCVdxQ/w9NrSYFEBKTDKpLKfNkCGDc1RwjZLQcm3vqJ2UW9Xfa3tgAHz6ivp6vgC2yD4/6352ndnN0X0TL7seypkjZlMsjmZnf0Mm5Q+JykRWQBKCVCVPbARPXWyQtb5VgLB6Biq7/Uixcj2WGqdI8tGSgkuRG+t910GKP2D7AQH0DB9FMDW/obJZ8giFI3Wg8Cvevz0M+5m0rTh7XDBlvo9Y4vm13EXmfttwI4mBo1EG15fxJhUiCLbiiyCf/ZA6MFAhg3pGIZGdGIVjtPn6UcMk9A/UUr9PhoNsCENw1APAq0gpH73e+M+0ueyHbabc3vkbcdtzcf/fiy+NxQEjf9ud/ELBHAXJ0nk4z+MXH2Ev/kWyV4k7SkvpPc9Qr38F6RPWnM9cN6DJ0AdD1BhtgABtmoRoFCvPsBAumNm6soZG2Gk5GyVTo2sJncSyp0jQTYoR6WDvTwaaEcHsxHfvuWhHA3a6bN7twRKtcGok6NsCi7jYRrM2jExsUFMxMQYuJbMhuWNOumEJy9hi29Dmg5zMp/A5+hhPG19j1vBrq8JTLr8ki5VLPmG/PynJHVul440bxg5xuymHUFPBshC+nA9I1FmwbRBTNHAcik3Oae0cxKoI3MOriM42UrPe51nsaGxJ+WfXubAsP84aabUlQSJ1IiE0iPETLUU4CATgfXSCSpuRFRmCGbO+wSpAnzaeaCYW1VNEysRtuXCEL1kUFUbbtMv3Tilt/1c11jt3Q5bbMa84cpWipp8Elw3MZhOHsOlwwVUQM3lAR35JiFQbaYCRnMF2lxAWoOg2gyoIV4PouX8HytNIfLhqpJtXB4vjiViUI8IJ7bkC4ikkQvKksnOTKICwnqWSZ9YS5f0WCxmpgjbIq7EJcM4aI2nmhLNY2JIUgOjXZFWBHb+x5oh6cwb0Tv1ackHdKi0I9OO2wE9aogIOn540CCCziyhN+IaejtgAONKznHlHyutPrHGwCx9S6B8kfS4Mfi4Eyv7OU730bT1SCBjt834cXsf43zVjPUqqJjgrjeGnBxSG4aYAKFuVbeCfkDIjAqMb6yLNIbCuvXhMH2/+k2vkNpkORhR59N1CkzoOENvneIosjYmuTxlhUzaGEJQ/iWqx4dmwpmKjrwTiTGTCVozNAYqk/zXOndWxuWSmJkQpJw3pK5KX6QrLt5LATMqpmPAQhkhK6PUjzHUn7E0gHE0kPE0iKkolgkUx9SZmVAdDgpffdyJKg3k7VmzYGCwVXGz/tXmkOIp+vcWs+EMuhhvN0h9uhfzWJziBQmCREGSIFmQIkgVpAnSBRmC//6hkLZwaVhwxlrJSOdqlFtOYxlau9F2QN5Y98xmIAsiM1HVp2VFX+DHHGg6Ecjh3vmqtidX3qHI2qycTk/iwxSt5UzTmEP92ZBnEWTk4Mx8Mpl78ZDokxg/KWb+Q0QkvdKVmq3TMW+RXEgrsziSAfNXFMhDc60N5N9jQzjfO0kBKpUZl0ZmwJ41j/B9Hz6wmRaJB84niNmQrzp9eSlQCDDzazGDdVi3P36VZQ+Jy4f9UBNp+3zTjqI4abaFAm+GShVaXlsGdF3FYzZcDI6cori4kMxUECl9IjJZpzkvitAoxKue+90pDMvcKRxLl53TmOKCmV/xRolNKSqqUxc6LStOETmFOiLZZptlZepcKiAzteG8PEdpnQpbOMNcMsR4RR2Bs0cKFEvSmIjAFcnarqwUL4lD
hHmnVkwu1IwshbiCcgvOheZuYyOteufZZwlcTlLgnZ3o/WcYdzZHW/WGaqaVfmTZ1aWCceJjkbZqsfbkOtcFlUZM/jy+hXHDbaUobWqqXaeWobbLO99yG5N3U4wxco0rQGGcOLASFMXeJoham8M+/x6O2WywK2l4HGbq1CoUyC/IZikQhdq3SiuNrvAEj0AVu9x2x3lp/xWzahaxidezFVtdcb5uEnzyl0ZmYiuKI0exvCd4Xc9CV1KB0db00z92wDPde0kukbvZIWN6jUWFTmPIC/Y4UPCm8UfDTFZpZNon1qLFTkBhxzB+FjQRA2Q/YRJT8pQigslMaUpFyAG8TMlXigiqmAZX4xgijKjRlGpLE0GdplRfCaJo0JQaSxNBk6ZmMzcya0FmrcisDdn0Q3HI2sWSppYigmlM1XT/kLQZSNpMJG0WkjYbSZuDpM1F0uYhFc1HxU4m1QJjDK6iL0S5uSj5rgXc3RejEigtcRBtqYPQsiTskmO5vosV+q4VGIKbOkDg0jtRrq+Em1YloaTFar3EGr1EUC8R0kus1Uus00usL97ABr2BjXoDm/QGNhuWtMVBKOwg/i78lT7hBsAvDmwHc/ao3vmUbBmhjeYySZNWvGkfZAgISDSaDo1SVpzGDsAEkF8B+gEapViUoZgUWXcRIGFZNm6gWbAKk0bp0k1MHG9fLYtV4iS2SmLEQFARzRcnf9PUS0LVn05/J9MiRRBU3v2IrvW974v4N00L7ZMk0wXP1409CHo/an8zTRHD3eSJ6m8D4YMkZNl3M79sqeuAsr/m3f+8/yl7A50aiAEJgeBeMWzu7ui9UfUBCe2TIqZIoOd/3/udRBOQidQZUERzb2/VwZN1H/Sju82ew2H2Wfr6qvfVf3hqwDvAIpkQVFy4B9Pe9e4/XvPeceu7h3dvO56iJPf0+A6cqA2ip18ER+iFgggiuOkvj24bby0N9j2UHIkgqIt+sVgfodC4YghLSMjSZbH0VR/6dMDrYJeKHilKTemt6v6kvzvn3/RrdWtr0GoN/xL+Sex/cPYLUpepx9cz/D46UPU5KXgAQa+NDps1v6J3xP1i2HtaDB0M9aX2deA7SYff//+gUCovMmIK/qfsFcOk+4Y5ZN97XlG6zebqtMbKgeRFi51vnxTQYBUik2rS/Cn6PC8ADR8FGxsRPB82dzfND90gIcshOcYUkfjherBz53odpm6TP8txlwOZ71xmfHHOvq053qFF/MRlS3jP0ELudrf2OeN8DHvp6ZceLe8qKYvWz/7yp0u4dKPfli3CYq0O13Ih71mylJ80tOi10On8wi+F4+LWgDPeJ30msSQt9/vkmHq9/Lvo2b461mP801v3W4xTcs6CbvF9UDdrSt+A8OUbpSh55qAUFXWznBBfdeJ8a4d7ugT5tvxUza3h9m4H7ptTqiG4z0g5dc0X29OcGlhpGFMpQo9ytTS+NViZpNdvU4kWx+LKxNY10kQ1yqGXrhe4/1nvP7E+nd5A92TtaRplbHSqoIdOqtRWti+fkB5/n1+/VvCmz12pG1kpQWsfi1ftlBobm0bpngs16CHkbIwdLnParxtTV3QYRlfJ0KFskH7pdN/YDn+yRuSd7sNH3aO0DYPggk6uWuXrfOc+fa3VTxFVvKaNxHsiHmsXyCLIE5yuOeN3/Jdf8HBL/5M6shjyhxHx9BjB1O0+4NLOnjLLSxwO7ukN4jMbOIcD879KLSi6Pk61Oqm2377n8079PXEEQ7cy7OKEC9nbpet118fxweTafpt69x/Bt8UqGzNQt7aelpc44dn5cqhwf71+qKp/Zf/+a0zcizOUWpl/iBcSXip0pplkatCchoH5c5aUM8I7/dWxAej8WicPL1URFZ9BDJelUwEwTkGqUhgSlydVes95YdXvhh9Gfz/aeFWvgVb4tuLbcv4+wLdutVZv/cUonwBD/6eDlE0aSiKK/uoH3+J1wDE/jMVqY2ysGufN84oIXB0sPzy8ollX/LegY74DgJXJR57sn+VGza0x3DnuIgABFM15LmajjjsNlYj+JEZGbuRYcAMOWxFkPN2w6Wd46xo4gVWQR/X4lyI/R6K/YK0110GzudPRW7Y+UOBGTfNNzHeYT0fiH0taunBpq9HEW8OKSaBGj21L0MqenEmNRWBAWDWAk4CpNoEZJ2tTaPFgbQYj8HxtFilErs3BTRwT8uO1NXQaWfIotchmPkAF5mMBAliEmZiOGVgCG9LgRzpscMAOOwowlT3JhusdazXGSC/hxR3UlmWVwWHpOIKheqONvjyhSiTHIkVUco5bnji8m//zL7PKaT1Vl5I6UE609f+gkr6MZKVyKc7zJRmCahLsdlyA5fdQkRSan9LgnnLEyGSkaKJCJog0wAgvepWBt80+1yKln1bMVtCljfNWDueKLsWwaEbBSfSPTEmVRsUcYYMnEjcjeyCZzBXK9E9BYBXLKjOSpUDR+nEV3TFSUdQaz+ot98QxgXwx0GQ+EEUAKB2qZPkQQ0GqFD8UPFMqyaCHM24BZmSGic9EYMagKizOw9Hz50DMrDLrqqLkTAhplMictiCAx5S3BIUQdeJeLnBy2CNtMfz6cV4u8XKoFZQesbf9YZiIERiHjaNodDW6LgcirX/mPnJIkBGDUpTBhSa0EIr38D5hCIszhCM8URGBqImoWjpvpt1ebu/v3Gl3qJfMnNM+9V+kiRFyROTPHQWOcs1dNW94/ukKMPZBvDi55i5CttdeJz84DLngLqjcdwEZ87bFFR8CIG35OAkDVN6VRDZ7aq67NteYqZ2lpT8oYB2CytoBd6VuAx4WgiAsnuj3WohG+LugzXiQRDeM3XYXlULv4dp5VFYC) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size3-Regular-6ab6b62e.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size3-Regular-500e04d5.ttf) format("truetype")}@font-face{font-family:KaTeX_Size4;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-a4af7d41.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-99f9c675.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Size4-Regular-c647367d.ttf) 
format("truetype")}@font-face{font-family:KaTeX_Typewriter;font-style:normal;font-weight:400;src:url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-71d517d6.woff2) format("woff2"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-e14fed02.woff) format("woff"),url(https://gradio.s3-us-west-2.amazonaws.com/3.33.1/assets/KaTeX_Typewriter-Regular-f01f3e87.ttf) format("truetype")}.gradio-container-3-33-1 .katex{text-rendering:auto;font: 1.21em KaTeX_Main,Times New Roman,serif;line-height:1.2;text-indent:0}.gradio-container-3-33-1 .katex *{-ms-high-contrast-adjust:none!important;border-color:currentColor}.gradio-container-3-33-1 .katex .katex-version:after{content:"0.16.7"}.gradio-container-3-33-1 .katex .katex-mathml{clip:rect(1px,1px,1px,1px);border:0;height:1px;overflow:hidden;padding:0;position:absolute;width:1px}.gradio-container-3-33-1 .katex .katex-html>.newline{display:block}.gradio-container-3-33-1 .katex .base{position:relative;white-space:nowrap;width:-webkit-min-content;width:-moz-min-content;width:min-content}.gradio-container-3-33-1 .katex .base,.gradio-container-3-33-1 .katex .strut{display:inline-block}.gradio-container-3-33-1 .katex .textbf{font-weight:700}.gradio-container-3-33-1 .katex .textit{font-style:italic}.gradio-container-3-33-1 .katex .textrm{font-family:KaTeX_Main}.gradio-container-3-33-1 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-33-1 .katex .texttt{font-family:KaTeX_Typewriter}.gradio-container-3-33-1 .katex .mathnormal{font-family:KaTeX_Math;font-style:italic}.gradio-container-3-33-1 .katex .mathit{font-family:KaTeX_Main;font-style:italic}.gradio-container-3-33-1 .katex .mathrm{font-style:normal}.gradio-container-3-33-1 .katex .mathbf{font-family:KaTeX_Main;font-weight:700}.gradio-container-3-33-1 .katex .boldsymbol{font-family:KaTeX_Math;font-style:italic;font-weight:700}.gradio-container-3-33-1 .katex .amsrm,.gradio-container-3-33-1 .katex .mathbb,.gradio-container-3-33-1 .katex .textbb{font-family:KaTeX_AMS}.gradio-container-3-33-1 .katex .mathcal{font-family:KaTeX_Caligraphic}.gradio-container-3-33-1 .katex .mathfrak,.gradio-container-3-33-1 .katex .textfrak{font-family:KaTeX_Fraktur}.gradio-container-3-33-1 .katex .mathtt{font-family:KaTeX_Typewriter}.gradio-container-3-33-1 .katex .mathscr,.gradio-container-3-33-1 .katex .textscr{font-family:KaTeX_Script}.gradio-container-3-33-1 .katex .mathsf,.gradio-container-3-33-1 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-33-1 .katex .mathboldsf,.gradio-container-3-33-1 .katex .textboldsf{font-family:KaTeX_SansSerif;font-weight:700}.gradio-container-3-33-1 .katex .mathitsf,.gradio-container-3-33-1 .katex .textitsf{font-family:KaTeX_SansSerif;font-style:italic}.gradio-container-3-33-1 .katex .mainrm{font-family:KaTeX_Main;font-style:normal}.gradio-container-3-33-1 .katex .vlist-t{border-collapse:collapse;display:inline-table;table-layout:fixed}.gradio-container-3-33-1 .katex .vlist-r{display:table-row}.gradio-container-3-33-1 .katex .vlist{display:table-cell;position:relative;vertical-align:bottom}.gradio-container-3-33-1 .katex .vlist>span{display:block;height:0;position:relative}.gradio-container-3-33-1 .katex .vlist>span>span{display:inline-block}.gradio-container-3-33-1 .katex .vlist>span>.pstrut{overflow:hidden;width:0}.gradio-container-3-33-1 .katex .vlist-t2{margin-right:-2px}.gradio-container-3-33-1 .katex 
.vlist-s{display:table-cell;font-size:1px;min-width:2px;vertical-align:bottom;width:2px}.gradio-container-3-33-1 .katex .vbox{align-items:baseline;display:inline-flex;flex-direction:column}.gradio-container-3-33-1 .katex .hbox{width:100%}.gradio-container-3-33-1 .katex .hbox,.gradio-container-3-33-1 .katex .thinbox{display:inline-flex;flex-direction:row}.gradio-container-3-33-1 .katex .thinbox{max-width:0;width:0}.gradio-container-3-33-1 .katex .msupsub{text-align:left}.gradio-container-3-33-1 .katex .mfrac>span>span{text-align:center}.gradio-container-3-33-1 .katex .mfrac .frac-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .hdashline,.gradio-container-3-33-1 .katex .hline,.gradio-container-3-33-1 .katex .mfrac .frac-line,.gradio-container-3-33-1 .katex .overline .overline-line,.gradio-container-3-33-1 .katex .rule,.gradio-container-3-33-1 .katex .underline .underline-line{min-height:1px}.gradio-container-3-33-1 .katex .mspace{display:inline-block}.gradio-container-3-33-1 .katex .clap,.gradio-container-3-33-1 .katex .llap,.gradio-container-3-33-1 .katex .rlap{position:relative;width:0}.gradio-container-3-33-1 .katex .clap>.inner,.gradio-container-3-33-1 .katex .llap>.inner,.gradio-container-3-33-1 .katex .rlap>.inner{position:absolute}.gradio-container-3-33-1 .katex .clap>.fix,.gradio-container-3-33-1 .katex .llap>.fix,.gradio-container-3-33-1 .katex .rlap>.fix{display:inline-block}.gradio-container-3-33-1 .katex .llap>.inner{right:0}.gradio-container-3-33-1 .katex .clap>.inner,.gradio-container-3-33-1 .katex .rlap>.inner{left:0}.gradio-container-3-33-1 .katex .clap>.inner>span{margin-left:-50%;margin-right:50%}.gradio-container-3-33-1 .katex .rule{border:0 solid;display:inline-block;position:relative}.gradio-container-3-33-1 .katex .hline,.gradio-container-3-33-1 .katex .overline .overline-line,.gradio-container-3-33-1 .katex .underline .underline-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .hdashline{border-bottom-style:dashed;display:inline-block;width:100%}.gradio-container-3-33-1 .katex .sqrt>.root{margin-left:.27777778em;margin-right:-.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size1,.gradio-container-3-33-1 .katex .sizing.reset-size1.size1{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size2,.gradio-container-3-33-1 .katex .sizing.reset-size1.size2{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size3,.gradio-container-3-33-1 .katex .sizing.reset-size1.size3{font-size:1.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size4,.gradio-container-3-33-1 .katex .sizing.reset-size1.size4{font-size:1.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size5,.gradio-container-3-33-1 .katex .sizing.reset-size1.size5{font-size:1.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size6,.gradio-container-3-33-1 .katex .sizing.reset-size1.size6{font-size:2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size7,.gradio-container-3-33-1 .katex .sizing.reset-size1.size7{font-size:2.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size8,.gradio-container-3-33-1 .katex .sizing.reset-size1.size8{font-size:2.88em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size9,.gradio-container-3-33-1 .katex .sizing.reset-size1.size9{font-size:3.456em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size1.size10,.gradio-container-3-33-1 .katex .sizing.reset-size1.size10{font-size:4.148em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size1.size11,.gradio-container-3-33-1 .katex .sizing.reset-size1.size11{font-size:4.976em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size1,.gradio-container-3-33-1 .katex .sizing.reset-size2.size1{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size2,.gradio-container-3-33-1 .katex .sizing.reset-size2.size2{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size3,.gradio-container-3-33-1 .katex .sizing.reset-size2.size3{font-size:1.16666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size4,.gradio-container-3-33-1 .katex .sizing.reset-size2.size4{font-size:1.33333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size5,.gradio-container-3-33-1 .katex .sizing.reset-size2.size5{font-size:1.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size6,.gradio-container-3-33-1 .katex .sizing.reset-size2.size6{font-size:1.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size7,.gradio-container-3-33-1 .katex .sizing.reset-size2.size7{font-size:2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size8,.gradio-container-3-33-1 .katex .sizing.reset-size2.size8{font-size:2.4em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size9,.gradio-container-3-33-1 .katex .sizing.reset-size2.size9{font-size:2.88em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size10,.gradio-container-3-33-1 .katex .sizing.reset-size2.size10{font-size:3.45666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size2.size11,.gradio-container-3-33-1 .katex .sizing.reset-size2.size11{font-size:4.14666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size1,.gradio-container-3-33-1 .katex .sizing.reset-size3.size1{font-size:.71428571em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size2,.gradio-container-3-33-1 .katex .sizing.reset-size3.size2{font-size:.85714286em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size3,.gradio-container-3-33-1 .katex .sizing.reset-size3.size3{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size4,.gradio-container-3-33-1 .katex .sizing.reset-size3.size4{font-size:1.14285714em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size5,.gradio-container-3-33-1 .katex .sizing.reset-size3.size5{font-size:1.28571429em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size6,.gradio-container-3-33-1 .katex .sizing.reset-size3.size6{font-size:1.42857143em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size7,.gradio-container-3-33-1 .katex .sizing.reset-size3.size7{font-size:1.71428571em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size8,.gradio-container-3-33-1 .katex .sizing.reset-size3.size8{font-size:2.05714286em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size9,.gradio-container-3-33-1 .katex .sizing.reset-size3.size9{font-size:2.46857143em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size10,.gradio-container-3-33-1 .katex .sizing.reset-size3.size10{font-size:2.96285714em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size3.size11,.gradio-container-3-33-1 .katex .sizing.reset-size3.size11{font-size:3.55428571em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size4.size1,.gradio-container-3-33-1 .katex .sizing.reset-size4.size1{font-size:.625em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size2,.gradio-container-3-33-1 .katex .sizing.reset-size4.size2{font-size:.75em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size3,.gradio-container-3-33-1 .katex .sizing.reset-size4.size3{font-size:.875em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size4,.gradio-container-3-33-1 .katex .sizing.reset-size4.size4{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size5,.gradio-container-3-33-1 .katex .sizing.reset-size4.size5{font-size:1.125em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size6,.gradio-container-3-33-1 .katex .sizing.reset-size4.size6{font-size:1.25em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size7,.gradio-container-3-33-1 .katex .sizing.reset-size4.size7{font-size:1.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size8,.gradio-container-3-33-1 .katex .sizing.reset-size4.size8{font-size:1.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size9,.gradio-container-3-33-1 .katex .sizing.reset-size4.size9{font-size:2.16em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size10,.gradio-container-3-33-1 .katex .sizing.reset-size4.size10{font-size:2.5925em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size4.size11,.gradio-container-3-33-1 .katex .sizing.reset-size4.size11{font-size:3.11em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size1,.gradio-container-3-33-1 .katex .sizing.reset-size5.size1{font-size:.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size2,.gradio-container-3-33-1 .katex .sizing.reset-size5.size2{font-size:.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size3,.gradio-container-3-33-1 .katex .sizing.reset-size5.size3{font-size:.77777778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size4,.gradio-container-3-33-1 .katex .sizing.reset-size5.size4{font-size:.88888889em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size5,.gradio-container-3-33-1 .katex .sizing.reset-size5.size5{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size6,.gradio-container-3-33-1 .katex .sizing.reset-size5.size6{font-size:1.11111111em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size7,.gradio-container-3-33-1 .katex .sizing.reset-size5.size7{font-size:1.33333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size8,.gradio-container-3-33-1 .katex .sizing.reset-size5.size8{font-size:1.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size9,.gradio-container-3-33-1 .katex .sizing.reset-size5.size9{font-size:1.92em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size10,.gradio-container-3-33-1 .katex .sizing.reset-size5.size10{font-size:2.30444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size5.size11,.gradio-container-3-33-1 .katex .sizing.reset-size5.size11{font-size:2.76444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size1,.gradio-container-3-33-1 .katex .sizing.reset-size6.size1{font-size:.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size2,.gradio-container-3-33-1 .katex .sizing.reset-size6.size2{font-size:.6em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size3,.gradio-container-3-33-1 .katex 
.sizing.reset-size6.size3{font-size:.7em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size4,.gradio-container-3-33-1 .katex .sizing.reset-size6.size4{font-size:.8em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size5,.gradio-container-3-33-1 .katex .sizing.reset-size6.size5{font-size:.9em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size6,.gradio-container-3-33-1 .katex .sizing.reset-size6.size6{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size7,.gradio-container-3-33-1 .katex .sizing.reset-size6.size7{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size8,.gradio-container-3-33-1 .katex .sizing.reset-size6.size8{font-size:1.44em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size9,.gradio-container-3-33-1 .katex .sizing.reset-size6.size9{font-size:1.728em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size10,.gradio-container-3-33-1 .katex .sizing.reset-size6.size10{font-size:2.074em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size6.size11,.gradio-container-3-33-1 .katex .sizing.reset-size6.size11{font-size:2.488em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size1,.gradio-container-3-33-1 .katex .sizing.reset-size7.size1{font-size:.41666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size2,.gradio-container-3-33-1 .katex .sizing.reset-size7.size2{font-size:.5em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size3,.gradio-container-3-33-1 .katex .sizing.reset-size7.size3{font-size:.58333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size4,.gradio-container-3-33-1 .katex .sizing.reset-size7.size4{font-size:.66666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size5,.gradio-container-3-33-1 .katex .sizing.reset-size7.size5{font-size:.75em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size6,.gradio-container-3-33-1 .katex .sizing.reset-size7.size6{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size7,.gradio-container-3-33-1 .katex .sizing.reset-size7.size7{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size8,.gradio-container-3-33-1 .katex .sizing.reset-size7.size8{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size9,.gradio-container-3-33-1 .katex .sizing.reset-size7.size9{font-size:1.44em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size10,.gradio-container-3-33-1 .katex .sizing.reset-size7.size10{font-size:1.72833333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size7.size11,.gradio-container-3-33-1 .katex .sizing.reset-size7.size11{font-size:2.07333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size1,.gradio-container-3-33-1 .katex .sizing.reset-size8.size1{font-size:.34722222em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size2,.gradio-container-3-33-1 .katex .sizing.reset-size8.size2{font-size:.41666667em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size3,.gradio-container-3-33-1 .katex .sizing.reset-size8.size3{font-size:.48611111em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size4,.gradio-container-3-33-1 .katex .sizing.reset-size8.size4{font-size:.55555556em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size5,.gradio-container-3-33-1 .katex .sizing.reset-size8.size5{font-size:.625em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size8.size6,.gradio-container-3-33-1 .katex .sizing.reset-size8.size6{font-size:.69444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size7,.gradio-container-3-33-1 .katex .sizing.reset-size8.size7{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size8,.gradio-container-3-33-1 .katex .sizing.reset-size8.size8{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size9,.gradio-container-3-33-1 .katex .sizing.reset-size8.size9{font-size:1.2em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size10,.gradio-container-3-33-1 .katex .sizing.reset-size8.size10{font-size:1.44027778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size8.size11,.gradio-container-3-33-1 .katex .sizing.reset-size8.size11{font-size:1.72777778em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size1,.gradio-container-3-33-1 .katex .sizing.reset-size9.size1{font-size:.28935185em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size2,.gradio-container-3-33-1 .katex .sizing.reset-size9.size2{font-size:.34722222em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size3,.gradio-container-3-33-1 .katex .sizing.reset-size9.size3{font-size:.40509259em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size4,.gradio-container-3-33-1 .katex .sizing.reset-size9.size4{font-size:.46296296em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size5,.gradio-container-3-33-1 .katex .sizing.reset-size9.size5{font-size:.52083333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size6,.gradio-container-3-33-1 .katex .sizing.reset-size9.size6{font-size:.5787037em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size7,.gradio-container-3-33-1 .katex .sizing.reset-size9.size7{font-size:.69444444em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size8,.gradio-container-3-33-1 .katex .sizing.reset-size9.size8{font-size:.83333333em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size9,.gradio-container-3-33-1 .katex .sizing.reset-size9.size9{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size10,.gradio-container-3-33-1 .katex .sizing.reset-size9.size10{font-size:1.20023148em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size9.size11,.gradio-container-3-33-1 .katex .sizing.reset-size9.size11{font-size:1.43981481em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size1,.gradio-container-3-33-1 .katex .sizing.reset-size10.size1{font-size:.24108004em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size2,.gradio-container-3-33-1 .katex .sizing.reset-size10.size2{font-size:.28929605em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size3,.gradio-container-3-33-1 .katex .sizing.reset-size10.size3{font-size:.33751205em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size4,.gradio-container-3-33-1 .katex .sizing.reset-size10.size4{font-size:.38572806em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size5,.gradio-container-3-33-1 .katex .sizing.reset-size10.size5{font-size:.43394407em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size6,.gradio-container-3-33-1 .katex .sizing.reset-size10.size6{font-size:.48216008em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size7,.gradio-container-3-33-1 .katex .sizing.reset-size10.size7{font-size:.57859209em}.gradio-container-3-33-1 .katex 
.fontsize-ensurer.reset-size10.size8,.gradio-container-3-33-1 .katex .sizing.reset-size10.size8{font-size:.69431051em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size9,.gradio-container-3-33-1 .katex .sizing.reset-size10.size9{font-size:.83317261em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size10,.gradio-container-3-33-1 .katex .sizing.reset-size10.size10{font-size:1em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size10.size11,.gradio-container-3-33-1 .katex .sizing.reset-size10.size11{font-size:1.19961427em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size1,.gradio-container-3-33-1 .katex .sizing.reset-size11.size1{font-size:.20096463em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size2,.gradio-container-3-33-1 .katex .sizing.reset-size11.size2{font-size:.24115756em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size3,.gradio-container-3-33-1 .katex .sizing.reset-size11.size3{font-size:.28135048em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size4,.gradio-container-3-33-1 .katex .sizing.reset-size11.size4{font-size:.32154341em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size5,.gradio-container-3-33-1 .katex .sizing.reset-size11.size5{font-size:.36173633em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size6,.gradio-container-3-33-1 .katex .sizing.reset-size11.size6{font-size:.40192926em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size7,.gradio-container-3-33-1 .katex .sizing.reset-size11.size7{font-size:.48231511em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size8,.gradio-container-3-33-1 .katex .sizing.reset-size11.size8{font-size:.57877814em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size9,.gradio-container-3-33-1 .katex .sizing.reset-size11.size9{font-size:.69453376em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size10,.gradio-container-3-33-1 .katex .sizing.reset-size11.size10{font-size:.83360129em}.gradio-container-3-33-1 .katex .fontsize-ensurer.reset-size11.size11,.gradio-container-3-33-1 .katex .sizing.reset-size11.size11{font-size:1em}.gradio-container-3-33-1 .katex .delimsizing.size1{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .delimsizing.size2{font-family:KaTeX_Size2}.gradio-container-3-33-1 .katex .delimsizing.size3{font-family:KaTeX_Size3}.gradio-container-3-33-1 .katex .delimsizing.size4{font-family:KaTeX_Size4}.gradio-container-3-33-1 .katex .delimsizing.mult .delim-size1>span{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .delimsizing.mult .delim-size4>span{font-family:KaTeX_Size4}.gradio-container-3-33-1 .katex .nulldelimiter{display:inline-block;width:.12em}.gradio-container-3-33-1 .katex .delimcenter,.gradio-container-3-33-1 .katex .op-symbol{position:relative}.gradio-container-3-33-1 .katex .op-symbol.small-op{font-family:KaTeX_Size1}.gradio-container-3-33-1 .katex .op-symbol.large-op{font-family:KaTeX_Size2}.gradio-container-3-33-1 .katex .accent>.vlist-t,.gradio-container-3-33-1 .katex .op-limits>.vlist-t{text-align:center}.gradio-container-3-33-1 .katex .accent .accent-body{position:relative}.gradio-container-3-33-1 .katex .accent .accent-body:not(.accent-full){width:0}.gradio-container-3-33-1 .katex .overlay{display:block}.gradio-container-3-33-1 .katex .mtable .vertical-separator{display:inline-block;min-width:1px}.gradio-container-3-33-1 .katex .mtable .arraycolsep{display:inline-block}.gradio-container-3-33-1 
.katex .mtable .col-align-c>.vlist-t{text-align:center}.gradio-container-3-33-1 .katex .mtable .col-align-l>.vlist-t{text-align:left}.gradio-container-3-33-1 .katex .mtable .col-align-r>.vlist-t{text-align:right}.gradio-container-3-33-1 .katex .svg-align{text-align:left}.gradio-container-3-33-1 .katex svg{fill:currentColor;stroke:currentColor;fill-rule:nonzero;fill-opacity:1;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;display:block;height:inherit;position:absolute;width:100%}.gradio-container-3-33-1 .katex svg path{stroke:none}.gradio-container-3-33-1 .katex img{border-style:none;max-height:none;max-width:none;min-height:0;min-width:0}.gradio-container-3-33-1 .katex .stretchy{display:block;overflow:hidden;position:relative;width:100%}.gradio-container-3-33-1 .katex .stretchy:after,.gradio-container-3-33-1 .katex .stretchy:before{content:""}.gradio-container-3-33-1 .katex .hide-tail{overflow:hidden;position:relative;width:100%}.gradio-container-3-33-1 .katex .halfarrow-left{left:0;overflow:hidden;position:absolute;width:50.2%}.gradio-container-3-33-1 .katex .halfarrow-right{overflow:hidden;position:absolute;right:0;width:50.2%}.gradio-container-3-33-1 .katex .brace-left{left:0;overflow:hidden;position:absolute;width:25.1%}.gradio-container-3-33-1 .katex .brace-center{left:25%;overflow:hidden;position:absolute;width:50%}.gradio-container-3-33-1 .katex .brace-right{overflow:hidden;position:absolute;right:0;width:25.1%}.gradio-container-3-33-1 .katex .x-arrow-pad{padding:0 .5em}.gradio-container-3-33-1 .katex .cd-arrow-pad{padding:0 .55556em 0 .27778em}.gradio-container-3-33-1 .katex .mover,.gradio-container-3-33-1 .katex .munder,.gradio-container-3-33-1 .katex .x-arrow{text-align:center}.gradio-container-3-33-1 .katex .boxpad{padding:0 .3em}.gradio-container-3-33-1 .katex .fbox,.gradio-container-3-33-1 .katex .fcolorbox{border:.04em solid;box-sizing:border-box}.gradio-container-3-33-1 .katex .cancel-pad{padding:0 .2em}.gradio-container-3-33-1 .katex .cancel-lap{margin-left:-.2em;margin-right:-.2em}.gradio-container-3-33-1 .katex .sout{border-bottom-style:solid;border-bottom-width:.08em}.gradio-container-3-33-1 .katex .angl{border-right:.049em solid;border-top:.049em solid;box-sizing:border-box;margin-right:.03889em}.gradio-container-3-33-1 .katex .anglpad{padding:0 .03889em}.gradio-container-3-33-1 .katex .eqn-num:before{content:"(" counter(katexEqnNo) ")";counter-increment:katexEqnNo}.gradio-container-3-33-1 .katex .mml-eqn-num:before{content:"(" counter(mmlEqnNo) ")";counter-increment:mmlEqnNo}.gradio-container-3-33-1 .katex .mtr-glue{width:50%}.gradio-container-3-33-1 .katex .cd-vert-arrow{display:inline-block;position:relative}.gradio-container-3-33-1 .katex .cd-label-left{display:inline-block;position:absolute;right:calc(50% + .3em);text-align:left}.gradio-container-3-33-1 .katex .cd-label-right{display:inline-block;left:calc(50% + .3em);position:absolute;text-align:right}.gradio-container-3-33-1 .katex-display{display:block;margin:1em 0;text-align:center}.gradio-container-3-33-1 .katex-display>.katex{display:block;text-align:center;white-space:nowrap}.gradio-container-3-33-1 .katex-display>.katex>.katex-html{display:block;position:relative}.gradio-container-3-33-1 .katex-display>.katex>.katex-html>.tag{position:absolute;right:0}.gradio-container-3-33-1 .katex-display.leqno>.katex>.katex-html>.tag{left:0;right:auto}.gradio-container-3-33-1 
.katex-display.fleqn>.katex{padding-left:2em;text-align:left}.gradio-container-3-33-1 body{counter-reset:katexEqnNo mmlEqnNo}.wrap.svelte-17nzccn.svelte-17nzccn{padding:var(--block-padding);height:100%;max-height:480px;overflow-y:auto}.message-wrap.svelte-17nzccn.svelte-17nzccn{display:flex;flex-direction:column;gap:var(--spacing-xxl)}.message-wrap.svelte-17nzccn>div.svelte-17nzccn img{border-radius:13px;max-width:30vw}.message-wrap.svelte-17nzccn audio{width:100%}.message.svelte-17nzccn.svelte-17nzccn{position:relative;align-self:flex-start;border-width:1px;border-radius:var(--radius-xxl);background:var(--background-fill-secondary);padding:var(--spacing-xxl);width:calc(100% - var(--spacing-xxl));color:var(--body-text-color);font-size:var(--text-lg);line-height:var(--line-lg);overflow-wrap:break-word}.user.svelte-17nzccn.svelte-17nzccn{align-self:flex-end;border-bottom-right-radius:0}.bot.svelte-17nzccn.svelte-17nzccn{border-bottom-left-radius:0;padding-left:calc(2 * var(--spacing-xxl))}@media (max-width: 480px){.message.svelte-17nzccn.svelte-17nzccn{width:auto}.bot.svelte-17nzccn.svelte-17nzccn{padding-left:var(--spacing-xxl)}}.bot.svelte-17nzccn.svelte-17nzccn,.pending.svelte-17nzccn.svelte-17nzccn{border-color:var(--border-color-primary);background:var(--background-fill-secondary)}.user.svelte-17nzccn.svelte-17nzccn{border-color:var(--border-color-accent);background-color:var(--color-accent-soft)}.feedback.svelte-17nzccn.svelte-17nzccn{display:flex;position:absolute;top:var(--spacing-xl);right:calc(var(--spacing-xxl) + var(--spacing-xl));gap:var(--spacing-lg);font-size:var(--text-sm)}.feedback.svelte-17nzccn button.svelte-17nzccn{color:var(--body-text-color-subdued)}.feedback.svelte-17nzccn button.svelte-17nzccn:hover{color:var(--body-text-color)}.selectable.svelte-17nzccn.svelte-17nzccn{cursor:pointer}.pending.svelte-17nzccn.svelte-17nzccn{display:flex;justify-content:center;align-items:center;align-self:center;gap:2px}.dot-flashing.svelte-17nzccn.svelte-17nzccn{animation:svelte-17nzccn-dot-flashing 1s infinite linear alternate;border-radius:5px;background-color:var(--body-text-color);width:5px;height:5px;color:var(--body-text-color)}.dot-flashing.svelte-17nzccn.svelte-17nzccn:nth-child(2){animation-delay:.33s}.dot-flashing.svelte-17nzccn.svelte-17nzccn:nth-child(3){animation-delay:.66s}@media (max-width: 480px){.user.svelte-17nzccn.svelte-17nzccn{align-self:flex-end}.bot.svelte-17nzccn.svelte-17nzccn{align-self:flex-start;padding-left:var(--size-3)}}@keyframes svelte-17nzccn-dot-flashing{0%{opacity:.8}50%{opacity:.5}to{opacity:.8}}.message-wrap.svelte-17nzccn .message.svelte-17nzccn img{margin:var(--size-2);max-height:200px}.message-wrap.svelte-17nzccn .message.svelte-17nzccn a{color:var(--color-text-link);text-decoration:underline}.hide.svelte-17nzccn.svelte-17nzccn{display:none}.message-wrap.svelte-17nzccn pre[class*=language-],.message-wrap.svelte-17nzccn pre{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);box-shadow:none;border:none;border-radius:var(--radius-md);background-color:var(--chatbot-code-background-color);padding:var(--spacing-xl) 10px}.message-wrap.svelte-17nzccn table,.message-wrap.svelte-17nzccn tr,.message-wrap.svelte-17nzccn td,.message-wrap.svelte-17nzccn th{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);padding:var(--spacing-xl)}.message-wrap.svelte-17nzccn .bot.svelte-17nzccn table,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn tr,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn td,.message-wrap.svelte-17nzccn .bot.svelte-17nzccn 
th{border:1px solid var(--border-color-primary)}.message-wrap.svelte-17nzccn .user.svelte-17nzccn table,.message-wrap.svelte-17nzccn .user.svelte-17nzccn tr,.message-wrap.svelte-17nzccn .user.svelte-17nzccn td,.message-wrap.svelte-17nzccn .user.svelte-17nzccn th{border:1px solid var(--border-color-accent)}.message-wrap.svelte-17nzccn ol,.message-wrap.svelte-17nzccn ul{padding-inline-start:2em}.message-wrap.svelte-17nzccn span.katex{font-size:var(--text-lg)}.message-wrap.svelte-17nzccn code>button{position:absolute;top:var(--spacing-md);right:var(--spacing-md);z-index:1;cursor:pointer;border-bottom-left-radius:var(--radius-sm);padding:5px;padding:var(--spacing-md);width:22px;height:22px}.message-wrap.svelte-17nzccn code>button>span{position:absolute;top:var(--spacing-md);right:var(--spacing-md);width:12px;height:12px}.message-wrap.svelte-17nzccn .check{position:absolute;top:0;right:0;opacity:0;z-index:var(--layer-top);transition:opacity .2s;background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}.message-wrap.svelte-17nzccn pre{position:relative} diff --git a/spaces/lightli/bingo-newbing/src/components/providers.tsx b/spaces/lightli/bingo-newbing/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md deleted file mode 100644 index b2afaee6ce714b2ecc6e811f9adc84903f5783d5..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Readon TV Movie Radio Player 4.0.0.0 LINK.md +++ /dev/null @@ -1,174 +0,0 @@ -
              -

              Download Readon TV Movie Radio Player 4.0.0.0

              - -

              Do you want to watch and listen to hundreds of online TV channels and radio stations on your PC? Do you want to enjoy sports programs and streaming broadcasts from around the world? If yes, then you should download Readon TV Movie Radio Player 4.0.0.0, a free and easy-to-use program that lets you access a wide range of multimedia content from your PC. In this article, we will show you how to download and install Readon TV Movie Radio Player 4.0.0.0 and explain its main features and benefits.

              - -

              What is Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 is a program that plays hundreds of TV channels and radio stations worldwide, as well as sports programs and streaming broadcasts. In other words, it not only lets you watch web TV channels but also listen to online radio stations.

              - -

              The interface in Download Readon TV Movie Radio Player 4.0.0.0 is plain, but it serves its purpose. You can click on the tab you're interested in (TV, Radio and Live Sports) and browse through the list of available content. Double-clicking on any channel or station is enough to start playing it.

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 also offers the possibility to record both the radio and TV streams, though this requires you to download and install an external VLC plug-in. The program also offers other plug-ins to extend its functionalities or make it compatible with third-party apps.

              - -

              Like other Internet TV programs, not all the channels in Download Readon TV Movie Radio Player 4.0.0.0 offer the same quality, and some of them are not available at all. But with such a wide selection, you will surely find something to watch.

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 gives you access to TV channels, radio stations and other multimedia content from your PC.

              - -

              What are the Features of Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 offers many features and benefits, such as:

              - -
                -
              • A free and easy-to-use program that plays hundreds of TV channels and radio stations worldwide.
              • -
              • A simple and intuitive interface that lets you browse through the available content by category.
              • -
              • A possibility to record both the radio and TV streams with an external VLC plug-in.
              • -
              • A variety of plug-ins to extend the functionalities or make it compatible with third-party apps.
              • -
              • A regular update and a handy auto-off feature.
              • -
              - -

              How to Download and Install Readon TV Movie Radio Player 4.0.0.0?

              - -

              Downloading and installing Readon TV Movie Radio Player 4.0.0.0 is quick and easy. Just follow these simple steps:

              - -
              1. Click on the download link below to get the setup file of Readon TV Movie Radio Player 4.0.0.0.
              2. Run the setup file as administrator and follow the instructions on the screen to complete the installation process.
              3. Launch the program and enjoy watching and listening to hundreds of online TV channels and radio stations.
              - -

              Download link: https://ccm.net/downloads/entertainment/6315-readon-tv-movie-radio-player/
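
              If you prefer to fetch the installer from a script, a minimal Python sketch using the requests library is shown below. It is only an illustration: the URL is a placeholder, since the link above points to a download page rather than directly to the setup file, so you would substitute the actual installer URL you are given.

              import requests  # third-party; assumed installed via: pip install requests

              # Placeholder - replace with the direct link to the setup file you are given.
              INSTALLER_URL = "https://example.com/ReadonTVMovieRadioPlayer_Setup.exe"

              def download_installer(url: str, destination: str = "ReadonTV_Setup.exe") -> None:
                  # Stream the download so a large installer is not held in memory at once.
                  with requests.get(url, stream=True, timeout=30) as response:
                      response.raise_for_status()
                      with open(destination, "wb") as fh:
                          for chunk in response.iter_content(chunk_size=8192):
                              fh.write(chunk)

              if __name__ == "__main__":
                  download_installer(INSTALLER_URL)
                  print("Installer saved - run it as administrator to continue the installation.")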

              - -

              What are the System Requirements for Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 is a lightweight and efficient program that does not require many system resources to run smoothly. However, you still need to meet some minimum system requirements to use it without any issues. Here are the system requirements for Download Readon TV Movie Radio Player 4.0.0.0:

              -

              - -
                -
              • Operating system: Windows XP/Vista/7/8/10
              • -
              • Processor: Pentium III or higher
              • -
              • Memory: 512 MB RAM or more
              • -
              • Storage: 20 MB free disk space or more
              • -
              • Internet connection: Required for accessing online content
              • -
              - -

              If you have a system that meets or exceeds these requirements, you can enjoy using Download Readon TV Movie Radio Player 4.0.0.0 without any problems.
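
              As a rough illustration, the short sketch below shows how these checks could be automated from Python. It is only a sketch: the psutil package is a third-party assumption, the C: drive and the 512 MB / 20 MB thresholds are simply taken from the list above, and the processor and Internet checks are omitted.

              import platform
              import shutil

              import psutil  # third-party; assumed installed via: pip install psutil

              def meets_minimum_requirements() -> bool:
                  # Operating system: any supported Windows release
                  is_windows = platform.system() == "Windows"
                  # Memory: 512 MB RAM or more
                  ram_ok = psutil.virtual_memory().total >= 512 * 1024 * 1024
                  # Storage: 20 MB free disk space or more on the system drive
                  disk_ok = shutil.disk_usage("C:\\").free >= 20 * 1024 * 1024
                  return is_windows and ram_ok and disk_ok

              if __name__ == "__main__":
                  print("Meets minimum requirements:", meets_minimum_requirements())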

              - -


              How to Use Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Using Download Readon TV Movie Radio Player 4.0.0.0 is very simple and straightforward. Once you launch the program, you will see three tabs on the top: TV, Radio and Live Sports. You can click on any of them to see the list of available channels and stations.

              - -

              To watch or listen to any channel or station, just double click on it and it will start playing in a small window. You can resize or move the window as you like. You can also use the buttons on the bottom to control the volume, mute, full screen, record, or stop the stream.

              - -

              If you want to search for a specific channel or station, you can use the search box on the top right corner. You can also filter the content by genre, country, language, or quality.

              - -

              If you want to record a stream, you need to download and install an external VLC plug-in first. Then you can click on the record button and choose the destination folder and file name for your recording. The recording will start automatically and stop when you click on the stop button.

              - -

              What are the Advantages of Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 has many advantages over other Internet TV programs, such as:

              - -
                -
              • It is free and easy to use.
              • -
              • It offers a wide range of multimedia content from around the world.
              • -
              • It allows you to record both the radio and TV streams with an external VLC plug-in.
              • -
              • It has a regular update and a handy auto-off feature.
              • -
              • It has a simple and intuitive interface that lets you browse through the available content by category.
              • -
              - -

              What are the Disadvantages of Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              Download Readon TV Movie Radio Player 4.0.0.0 also has some disadvantages that you should be aware of, such as:

              - -
                -
              • It may contain malware or viruses that can harm your PC.
              • -
              • It may violate intellectual property rights of the content owners.
              • -
              • It may not work properly with some channels or stations.
              • -
              • It may have an ugly interface that is not very appealing.
              • -
              • It may require additional plug-ins to extend its functionalities or make it compatible with third-party apps.
              • -
              -

              How to Uninstall Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              If you want to uninstall Download Readon TV Movie Radio Player 4.0.0.0 from your PC, you can follow these simple steps; a small check to confirm the program is gone is sketched after the list:

              - -
              1. Go to the Control Panel and click on Programs and Features.
              2. Find Download Readon TV Movie Radio Player 4.0.0.0 in the list of installed programs and click on Uninstall.
              3. Follow the instructions on the screen to complete the uninstallation process.
              4. Delete any leftover files or folders related to Download Readon TV Movie Radio Player 4.0.0.0 from your PC.
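
              As a follow-up to the steps above, the sketch below looks for the program's entry in the Windows list of installed software by reading the uninstall keys in the registry. It is an illustration only: the "Readon" name fragment is an assumption, and 32-bit entries under WOW6432Node as well as per-user installs are not covered.

              import winreg

              UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

              def is_still_installed(name_fragment: str = "Readon") -> bool:
                  # Walk the per-machine uninstall entries and compare display names.
                  with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
                      subkey_count = winreg.QueryInfoKey(root)[0]
                      for index in range(subkey_count):
                          subkey_name = winreg.EnumKey(root, index)
                          try:
                              with winreg.OpenKey(root, subkey_name) as subkey:
                                  display_name, _ = winreg.QueryValueEx(subkey, "DisplayName")
                          except OSError:
                              continue  # Entry without a DisplayName value; skip it.
                          if name_fragment.lower() in str(display_name).lower():
                              return True
                  return False

              if __name__ == "__main__":
                  print("Still listed in installed programs:", is_still_installed())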
              - -

              What are the Alternatives to Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              If you are looking for alternatives to Download Readon TV Movie Radio Player 4.0.0.0, you can try some of these programs that offer similar or better features and benefits:

              - -
                -
              • Online TV Player: A free program that lets you watch over 850 free Internet TV channels and listen to over 1500 free online radio stations on your PC.
              • -
              • Satellite TV from PC: A paid program that lets you watch thousands of TV channels on your PC with no monthly fees or subscriptions.
              • -
              • JLC's Internet TV: A free program that lets you watch more than 1,000 free online TV channels from around the world.
              • -
              • Free Online TV: A free program that lets you watch live TV streams from across the world on your PC.
              • -
              • FreeTV Player: A free program that lets you watch a wide variety of TV channels from your desktop.
              • -
              - -


              How to Troubleshoot Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              If you encounter any problems or errors while using Download Readon TV Movie Radio Player 4.0.0.0, you can try some of these troubleshooting tips:

              - -
                -
              • Make sure you have a stable and fast Internet connection to access online content (a quick connectivity check is sketched after this list).
              • -
              • Make sure you have the latest version of Download Readon TV Movie Radio Player 4.0.0.0 and update it regularly.
              • -
              • Make sure you have the external VLC plug-in installed if you want to record streams.
              • -
              • Make sure you have the compatible plug-ins for the third-party apps you want to use with Download Readon TV Movie Radio Player 4.0.0.0.
              • -
              • Check the settings and preferences of Download Readon TV Movie Radio Player 4.0.0.0 and adjust them according to your needs.
              • -
              • Check the FAQ and Help sections of Download Readon TV Movie Radio Player 4.0.0.0 for more information and guidance.
              • -
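
              For the first tip, a minimal connectivity check is sketched below. It is only an illustration: it tries to open a TCP connection to a public DNS server (8.8.8.8 on port 53), which is an arbitrary choice and not something the program itself requires.

              import socket

              def internet_available(host: str = "8.8.8.8", port: int = 53, timeout: float = 3.0) -> bool:
                  # Attempt a short TCP connection; failure suggests no usable network path.
                  try:
                      with socket.create_connection((host, port), timeout=timeout):
                          return True
                  except OSError:
                      return False

              if __name__ == "__main__":
                  print("Internet connection looks OK:", internet_available())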
              - -

              How to Contact the Developers of Download Readon TV Movie Radio Player 4.0.0.0?

              - -

              If you have any questions, suggestions, feedback, or complaints about Download Readon TV Movie Radio Player 4.0.0.0, you can contact the developers of this program by using these methods:

              - -
                -
              • Email: You can send an email to readontech@gmail.com and expect a reply within 24 hours.
              • -
              • Website: You can visit the official website of Download Readon TV Movie Radio Player 4.0.0.0 at http://www.readontech.com/ and find more information and resources about this program.
              • -
              • Forum: You can join the online forum of Download Readon TV Movie Radio Player 4.0.0.0 at http://www.readontech.com/forum/ and interact with other users and developers of this program.
              • -
              - -

              Summary

              - -

              In this article, we have shown you how to download and install Readon TV Movie Radio Player 4.0.0.0 for free and covered its main features and benefits.

              - -

              We have also discussed the system requirements, the advantages and disadvantages, the uninstallation process, the alternatives, the troubleshooting tips, and the contact methods for Download Readon TV Movie Radio Player 4.0.0.0.

              - -

              We hope this article has been helpful, but we do not recommend using Download Readon TV Movie Radio Player 4.0.0.0 because it may contain malware or viruses, or violate intellectual property rights.

              - -

              If you want to watch and listen to hundreds of online TV channels and radio stations legally and safely, you should use a reputable Internet TV program instead.

              -


              -
              -
              \ No newline at end of file diff --git a/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000 --- a/spaces/litagin/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/lucaspedrajas/IF/settings.py b/spaces/lucaspedrajas/IF/settings.py deleted file mode 100644 index 9653e1b051f54a1bb655275559bca582423964f6..0000000000000000000000000000000000000000 --- a/spaces/lucaspedrajas/IF/settings.py +++ /dev/null @@ -1,56 +0,0 @@ -import os - -import numpy as np - -HF_TOKEN = os.getenv('HF_TOKEN') -UPLOAD_REPO_ID = os.getenv('UPLOAD_REPO_ID') -UPLOAD_RESULT_IMAGE = os.getenv('UPLOAD_RESULT_IMAGE') == '1' - -# UI options -SHOW_DUPLICATE_BUTTON = os.getenv('SHOW_DUPLICATE_BUTTON', '0') == '1' -SHOW_DEVICE_WARNING = os.getenv('SHOW_DEVICE_WARNING', '1') == '1' -SHOW_ADVANCED_OPTIONS = os.getenv('SHOW_ADVANCED_OPTIONS', '1') 
== '1' -SHOW_UPSCALE_TO_256_BUTTON = os.getenv('SHOW_UPSCALE_TO_256_BUTTON', - '0') == '1' -SHOW_NUM_IMAGES = os.getenv('SHOW_NUM_IMAGES_OPTION', '1') == '1' -SHOW_CUSTOM_TIMESTEPS_1 = os.getenv('SHOW_CUSTOM_TIMESTEPS_1', '1') == '1' -SHOW_CUSTOM_TIMESTEPS_2 = os.getenv('SHOW_CUSTOM_TIMESTEPS_2', '1') == '1' -SHOW_NUM_STEPS_1 = os.getenv('SHOW_NUM_STEPS_1', '0') == '1' -SHOW_NUM_STEPS_2 = os.getenv('SHOW_NUM_STEPS_2', '0') == '1' -SHOW_NUM_STEPS_3 = os.getenv('SHOW_NUM_STEPS_3', '1') == '1' -GALLERY_COLUMN_NUM = int(os.getenv('GALLERY_COLUMN_NUM', '4')) - -# Parameters -MAX_QUEUE_SIZE = int(os.getenv('MAX_QUEUE_SIZE', '10')) -MAX_SEED = np.iinfo(np.int32).max -MAX_NUM_IMAGES = int(os.getenv('MAX_NUM_IMAGES', '4')) -DEFAULT_NUM_IMAGES = min(MAX_NUM_IMAGES, - int(os.getenv('DEFAULT_NUM_IMAGES', '4'))) -MAX_NUM_STEPS = int(os.getenv('MAX_NUM_STEPS', '200')) -DEFAULT_CUSTOM_TIMESTEPS_1 = os.getenv('DEFAULT_CUSTOM_TIMESTEPS_1', - 'smart100') -DEFAULT_CUSTOM_TIMESTEPS_2 = os.getenv('DEFAULT_CUSTOM_TIMESTEPS_2', 'smart50') -DEFAULT_NUM_STEPS_3 = int(os.getenv('DEFAULT_NUM_STEPS_3', '40')) - -# Model options -DISABLE_AUTOMATIC_CPU_OFFLOAD = os.getenv( - 'DISABLE_AUTOMATIC_CPU_OFFLOAD') == '1' -DISABLE_SD_X4_UPSCALER = os.getenv('DISABLE_SD_X4_UPSCALER') == '1' - -# Other options -RUN_GARBAGE_COLLECTION = os.getenv('RUN_GARBAGE_COLLECTION', '1') == '1' -DEBUG = os.getenv('DEBUG') == '1' - -# Default options for the public demo -if os.getenv('IS_PUBLIC_DEMO') == '1': - # UI - SHOW_DUPLICATE_BUTTON = True - SHOW_NUM_STEPS_3 = False - SHOW_CUSTOM_TIMESTEPS_1 = False - SHOW_CUSTOM_TIMESTEPS_2 = False - SHOW_NUM_IMAGES = False - # parameters - DEFAULT_CUSTOM_TIMESTEPS_1 = 'smart50' - # model - DISABLE_AUTOMATIC_CPU_OFFLOAD = True - RUN_GARBAGE_COLLECTION = False diff --git a/spaces/luisoala/raw2logit/utils/ssim.py b/spaces/luisoala/raw2logit/utils/ssim.py deleted file mode 100644 index 2a2b8dbcf7f7cb5419a70993a7160e8f08854d3b..0000000000000000000000000000000000000000 --- a/spaces/luisoala/raw2logit/utils/ssim.py +++ /dev/null @@ -1,75 +0,0 @@ -"""https://github.com/Po-Hsun-Su/pytorch-ssim""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)]) - return gauss/gauss.sum() - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - -def _ssim(img1, img2, window, window_size, channel, size_average = True): - mu1 = F.conv2d(img1, window, padding = window_size//2, groups = channel) - mu2 = F.conv2d(img2, window, padding = window_size//2, groups = channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1*mu2 - - sigma1_sq = F.conv2d(img1*img1, window, padding = window_size//2, groups = channel) - mu1_sq - sigma2_sq = F.conv2d(img2*img2, window, padding = window_size//2, groups = channel) - mu2_sq - sigma12 = F.conv2d(img1*img2, window, padding = window_size//2, groups = channel) - mu1_mu2 - - C1 = 0.01**2 - C2 = 0.03**2 - - ssim_map = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*(sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1).mean(1).mean(1) - -class SSIM(torch.nn.Module): - def 
__init__(self, window_size = 11, size_average = True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - -def ssim(img1, img2, window_size = 11, size_average = True): - (_, channel, _, _) = img1.size() - window = create_window(window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - return _ssim(img1, img2, window, window_size, channel, size_average) \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h deleted file mode 100644 index 6c8a4f1e88e493ee08d24e668639c8d495fd49b1..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/common.h +++ /dev/null @@ -1,2 +0,0 @@ -#include "detail/common.h" -#warning "Including 'common.h' is deprecated. It will be removed in v3.0. Use 'pybind11.h'." diff --git a/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h b/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h deleted file mode 100644 index 38159514408b91dc36b5a25a755852f69832d930..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h +++ /dev/null @@ -1,107 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include - -namespace thrust -{ - -namespace random -{ - -namespace detail -{ - - -template - struct linear_congruential_engine_discard_implementation -{ - __host__ __device__ - static void discard(UIntType &state, unsigned long long z) - { - for(; z > 0; --z) - { - state = detail::mod(state); - } - } -}; // end linear_congruential_engine_discard - - -// specialize for small integers and c == 0 -// XXX figure out a robust implemenation of this for any unsigned integer type later -template - struct linear_congruential_engine_discard_implementation -{ - __host__ __device__ - static void discard(thrust::detail::uint32_t &state, unsigned long long z) - { - const thrust::detail::uint32_t modulus = m; - - // XXX we need to use unsigned long long here or we will encounter overflow in the - // multiplies below - // figure out a robust implementation of this later - unsigned long long multiplier = a; - unsigned long long multiplier_to_z = 1; - - // see http://en.wikipedia.org/wiki/Modular_exponentiation - while(z > 0) - { - if(z & 1) - { - // multiply in this bit's contribution while using modulus to keep result small - multiplier_to_z = (multiplier_to_z * multiplier) % modulus; - } - - // move to the next bit of the exponent, square (and mod) the base accordingly - z >>= 1; - multiplier = (multiplier * multiplier) % modulus; - } - - state = static_cast((multiplier_to_z * state) % modulus); - } -}; // end linear_congruential_engine_discard - - -struct linear_congruential_engine_discard -{ - template - __host__ __device__ - static void discard(LinearCongruentialEngine &lcg, unsigned long long z) - { - typedef typename LinearCongruentialEngine::result_type result_type; - const result_type c = LinearCongruentialEngine::increment; - const result_type a = LinearCongruentialEngine::multiplier; - const result_type m = LinearCongruentialEngine::modulus; - - // XXX WAR unused variable warnings - (void) c; - (void) a; - (void) m; - - linear_congruential_engine_discard_implementation::discard(lcg.m_x, z); - } -}; // end linear_congruential_engine_discard - - -} // end detail - -} // end random - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h deleted file mode 100644 index 9110e0af45845ed4a045e09011a1afaa3a66321f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/memory_resource.h +++ /dev/null @@ -1,111 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file cuda/memory_resource.h - * \brief Memory resources for the CUDA system. - */ - -#pragma once - -#include -#include -#include -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace system -{ -namespace cuda -{ - -//! 
\cond -namespace detail -{ - - typedef cudaError_t (*allocation_fn)(void **, std::size_t); - typedef cudaError_t (*deallocation_fn)(void *); - - template - class cuda_memory_resource THRUST_FINAL : public mr::memory_resource - { - public: - Pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - (void)alignment; - - void * ret; - cudaError_t status = Alloc(&ret, bytes); - - if (status != cudaSuccess) - { - cudaGetLastError(); // Clear the CUDA global error state. - throw thrust::system::detail::bad_alloc(thrust::cuda_category().message(status).c_str()); - } - - return Pointer(ret); - } - - void do_deallocate(Pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE - { - (void)bytes; - (void)alignment; - - cudaError_t status = Dealloc(thrust::detail::pointer_traits::get(p)); - - if (status != cudaSuccess) - { - thrust::cuda_cub::throw_on_error(status, "CUDA free failed"); - } - } - }; - - inline cudaError_t cudaMallocManaged(void ** ptr, std::size_t bytes) - { - return ::cudaMallocManaged(ptr, bytes, cudaMemAttachGlobal); - } - - typedef detail::cuda_memory_resource > - device_memory_resource; - typedef detail::cuda_memory_resource > - managed_memory_resource; - typedef detail::cuda_memory_resource - pinned_memory_resource; - -} // end detail -//! \endcond - -/*! The memory resource for the CUDA system. Uses cudaMalloc and wraps the result with \p cuda::pointer. */ -typedef detail::device_memory_resource memory_resource; -/*! The universal memory resource for the CUDA system. Uses cudaMallocManaged and wraps the result with \p cuda::pointer. */ -typedef detail::managed_memory_resource universal_memory_resource; -/*! The host pinned memory resource for the CUDA system. Uses cudaMallocHost and wraps the result with \p cuda::pointer. */ -typedef detail::pinned_memory_resource universal_host_pinned_memory_resource; - -} // end cuda -} // end system - -} // end namespace thrust - diff --git a/spaces/magicr/BuboGPT/imagebind/data/data_utils.py b/spaces/magicr/BuboGPT/imagebind/data/data_utils.py deleted file mode 100644 index c3e04e6a8966bb7ac57ea64e44313fda8cc7cb3a..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/imagebind/data/data_utils.py +++ /dev/null @@ -1,351 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch -import torch.nn as nn -import torchaudio -import logging - -import torchvision - -from imagebind.models.multimodal_preprocessors import SimpleTokenizer -from PIL import Image -from pytorchvideo import transforms as pv_transforms -from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler, RandomMultiClipSampler -from pytorchvideo.data.encoded_video import EncodedVideo - -from torchvision import transforms -from torchvision.transforms._transforms_video import NormalizeVideo - -DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds - -BPE_PATH = "bpe/bpe_simple_vocab_16e6.txt.gz" - - -def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length): - # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102 - waveform -= waveform.mean() - fbank = torchaudio.compliance.kaldi.fbank( - waveform, - htk_compat=True, - sample_frequency=sample_rate, - use_energy=False, - window_type="hanning", - num_mel_bins=num_mel_bins, - dither=0.0, - frame_length=25, - frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS, - ) - # Convert to [mel_bins, num_frames] shape - fbank = fbank.transpose(0, 1) - # Pad to target_length - n_frames = fbank.size(1) - p = target_length - n_frames - # if p is too large (say >20%), flash a warning - # if abs(p) / n_frames > 0.2: - # logging.warning( - # "Large gap between audio n_frames(%d) and " - # "target_length (%d). Is the audio_target_length " - # "setting correct?", - # n_frames, - # target_length, - # ) - # cut and pad - if p > 0: - fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0) - fbank = fbank.unsqueeze(0) - elif p < 0: - # fbank = fbank[:, 0:target_length] - # NOTE: Modified to compatible with longer clips - fbank = fbank.unsqueeze(0) - fbank = torchvision.transforms.Resize(size=[num_mel_bins, target_length])(fbank) - # Convert to [1, mel_bins, num_frames] shape, essentially like a 1 channel image - return fbank - - -def load_and_transform_vision_data(image_paths, device): - if image_paths is None: - return None - - image_ouputs = [] - for image_path in image_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - 224, interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - with open(image_path, "rb") as fopen: - image = Image.open(fopen).convert("RGB") - - image = data_transform(image).to(device) - image_ouputs.append(image) - return torch.stack(image_ouputs, dim=0) - - -def load_and_transform_text(text, device): - if text is None: - return None - tokenizer = SimpleTokenizer(bpe_path=BPE_PATH) - tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text] - tokens = torch.cat(tokens, dim=0) - return tokens - - -def load_and_transform_audio_data( - audio_paths, - device, - num_mel_bins=128, - target_length=204, - sample_rate=16000, - clip_duration=2, - clips_per_video=3, - mean=-4.268, - std=9.138, -): - if audio_paths is None: - return None - - audio_outputs = [] - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - - for audio_path in audio_paths: - waveform, sr = torchaudio.load(audio_path) - if sample_rate != sr: - waveform = torchaudio.functional.resample( - waveform, orig_freq=sr, new_freq=sample_rate - ) - all_clips_timepoints = get_constant_clip_timepoints( - clip_sampler, 
waveform.size(1) / sample_rate - ) - all_clips = [] - for clip_timepoints in all_clips_timepoints: - waveform_clip = waveform[ - :, - int(clip_timepoints[0] * sample_rate): int( - clip_timepoints[1] * sample_rate - ), - ] - waveform_melspec = waveform2melspec( - waveform_clip, sample_rate, num_mel_bins, target_length - ) - all_clips.append(waveform_melspec) - - normalize = transforms.Normalize(mean=mean, std=std) - all_clips = [normalize(ac).to(device) for ac in all_clips] - - all_clips = torch.stack(all_clips, dim=0) - audio_outputs.append(all_clips) - - return torch.stack(audio_outputs, dim=0) - - -def get_constant_clip_timepoints(clip_sampler, duration): - assert isinstance(clip_sampler, ConstantClipsPerVideoSampler), "Incompatible Type of Sampler!" - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def get_random_clip_timepoints(clip_sampler, duration): - assert isinstance(clip_sampler, RandomMultiClipSampler), "Incompatible Type of Sampler!" - starts, ends, _, _, _ = clip_sampler(0.0, duration, annotation=None) - all_clips_timepoints = sorted(list(zip(starts, ends)), key=lambda x: x[0]) - return all_clips_timepoints - - -def crop_boxes(boxes, x_offset, y_offset): - """ - Perform crop on the bounding boxes given the offsets. - Args: - boxes (ndarray or None): bounding boxes to perform crop. The dimension - is `num boxes` x 4. - x_offset (int): cropping offset in the x axis. - y_offset (int): cropping offset in the y axis. - Returns: - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - cropped_boxes = boxes.copy() - cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset - cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset - - return cropped_boxes - - -def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None): - """ - Perform uniform spatial sampling on the images and corresponding boxes. - Args: - images (tensor): images to perform uniform crop. The dimension is - `num frames` x `channel` x `height` x `width`. - size (int): size of height and weight to crop the images. - spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width - is larger than height. Or 0, 1, or 2 for top, center, and bottom - crop if height is larger than width. - boxes (ndarray or None): optional. Corresponding boxes to images. - Dimension is `num boxes` x 4. - scale_size (int): optinal. If not None, resize the images to scale_size before - performing any crop. - Returns: - cropped (tensor): images with dimension of - `num frames` x `channel` x `size` x `size`. - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. 
- """ - assert spatial_idx in [0, 1, 2] - ndim = len(images.shape) - if ndim == 3: - images = images.unsqueeze(0) - height = images.shape[2] - width = images.shape[3] - - if scale_size is not None: - if width <= height: - width, height = scale_size, int(height / width * scale_size) - else: - width, height = int(width / height * scale_size), scale_size - images = torch.nn.functional.interpolate( - images, - size=(height, width), - mode="bilinear", - align_corners=False, - ) - - y_offset = int(math.ceil((height - size) / 2)) - x_offset = int(math.ceil((width - size) / 2)) - - if height > width: - if spatial_idx == 0: - y_offset = 0 - elif spatial_idx == 2: - y_offset = height - size - else: - if spatial_idx == 0: - x_offset = 0 - elif spatial_idx == 2: - x_offset = width - size - cropped = images[:, :, y_offset: y_offset + size, x_offset: x_offset + size] - cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None - if ndim == 3: - cropped = cropped.squeeze(0) - return cropped, cropped_boxes - - -class SpatialCrop(nn.Module): - """ - Convert the video into 3 smaller clips spatially. Must be used after the - temporal crops to get spatial crops, and should be used with - -2 in the spatial crop at the slowfast augmentation stage (so full - frames are passed in here). Will return a larger list with the - 3x spatial crops as well. - """ - - def __init__(self, crop_size: int = 224, num_crops: int = 3): - super().__init__() - self.crop_size = crop_size - if num_crops == 3: - self.crops_to_ext = [0, 1, 2] - self.flipped_crops_to_ext = [] - elif num_crops == 1: - self.crops_to_ext = [1] - self.flipped_crops_to_ext = [] - else: - raise NotImplementedError("Nothing else supported yet") - - def forward(self, videos): - """ - Args: - videos: A list of C, T, H, W videos. - Returns: - videos: A list with 3x the number of elements. Each video converted - to C, T, H', W' by spatial cropping. 
- """ - assert isinstance(videos, list), "Must be a list of videos after temporal crops" - assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)" - res = [] - for video in videos: - for spatial_idx in self.crops_to_ext: - res.append(uniform_crop(video, self.crop_size, spatial_idx)[0]) - if not self.flipped_crops_to_ext: - continue - flipped_video = transforms.functional.hflip(video) - for spatial_idx in self.flipped_crops_to_ext: - res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0]) - return res - - -def load_and_transform_video_data( - video_paths, - device, - clip_duration=2, - clips_per_video=5, - sample_rate=16000, -): - if video_paths is None: - return None - - video_outputs = [] - video_transform = transforms.Compose( - [ - pv_transforms.ShortSideScale(224), - NormalizeVideo( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration) - - for video_path in video_paths: - video = EncodedVideo.from_path( - video_path, - decoder="decord", - decode_audio=False, - **{"sample_rate": sample_rate}, - ) - - all_clips_timepoints = get_constant_clip_timepoints(clip_sampler, video.duration) - - all_video = [] - for clip_timepoints in all_clips_timepoints: - # Read the clip, get frames - clip = video.get_clip(clip_timepoints[0], clip_timepoints[1]) - if clip is None: - raise ValueError("No clip found") - video_clip = frame_sampler(clip["video"]) - video_clip = video_clip / 255.0 # since this is float, need 0-1 - - all_video.append(video_clip) - - all_video = [video_transform(clip) for clip in all_video] - all_video = SpatialCrop(224, num_crops=3)(all_video) - - all_video = torch.stack(all_video, dim=0) - video_outputs.append(all_video) - - return torch.stack(video_outputs, dim=0).to(device) diff --git a/spaces/maisarah1109/stock-prediction/app.py b/spaces/maisarah1109/stock-prediction/app.py deleted file mode 100644 index 45ee8fc840c152513e64efa632ab72dada61e9cc..0000000000000000000000000000000000000000 --- a/spaces/maisarah1109/stock-prediction/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import yfinance as yf -import streamlit as st -import pandas as pd -import datetime - -import numpy as np -import matplotlib.pyplot as plt -from keras.models import Sequential -from keras.layers import LSTM -from keras.layers import Dense -from keras.layers import Bidirectional - - -st.write(""" -# Simple Stock Price App - -Shown are the stock **closing price** and **volume**. -""") - -def user_input_features() : - stock_symbol = st.sidebar.selectbox('Symbol',('ANTM', 'ARNA', 'DUTI', 'ELSA', 'MFMI')) - date_start = st.sidebar.date_input("Start Date", datetime.date(2015, 5, 31)) - date_end = st.sidebar.date_input("End Date", datetime.date.today()) - - tickerData = yf.Ticker(stock_symbol+'.JK') - tickerDf = tickerData.history(period='1d', start=date_start, end=date_end) - return tickerDf, stock_symbol - -input_df, stock_symbol = user_input_features() - -st.line_chart(input_df.Close) -st.line_chart(input_df.Volume) - -st.write(""" -# Stock Price Prediction - -Shown are the stock prediction for next 20 days. 
-""") - -n_steps = 100 -n_features = 1 - -model = Sequential() -model.add(Bidirectional(LSTM(300, activation='relu'), input_shape=(n_steps, n_features))) -model.add(Dense(1)) -model.compile(optimizer='adam', loss='mse') - -model.load_weights(stock_symbol + ".h5") -df = input_df.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False) -df = df[df.Volume > 0] - -close = df['Close'][-n_steps:].to_list() -min_in = min(close) -max_in = max(close) -in_seq = [] -for i in close : - in_seq.append((i - min_in) / (max_in - min_in)) - -for i in range(20) : - x_input = np.array(in_seq[-100:]) - x_input = x_input.reshape((1, n_steps, n_features)) - yhat = model.predict(x_input, verbose=0) - in_seq.append(yhat[0][0]) - -norm_res = in_seq[-20:] -res = [] -for i in norm_res : - res.append(i * (max_in - min_in) + min_in) - -closepred = close[-80:] -for x in res : - closepred.append(x) - -plt.figure(figsize = (20,10)) -plt.plot(closepred, label="Prediction") -plt.plot(close[-80:], label="Previous") -plt.ylabel('Price (Rp)', fontsize = 15 ) -plt.xlabel('Days', fontsize = 15 ) -plt.title(stock_symbol + " Stock Prediction", fontsize = 20) -plt.legend() -plt.grid() - -st.pyplot(plt) diff --git a/spaces/marioboy/neil-breen/synthesizer/utils/text.py b/spaces/marioboy/neil-breen/synthesizer/utils/text.py deleted file mode 100644 index 29372174aec95cd2eac1ea40096fcc148f532b07..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/synthesizer/utils/text.py +++ /dev/null @@ -1,74 +0,0 @@ -from .symbols import symbols -from . import cleaners -import re - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r"(.*?)\{(.+?)\}(.*)") - - -def text_to_sequence(text, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
- - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - sequence += _symbols_to_sequence(_clean_text(text, cleaner_names)) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # Append EOS token - sequence.append(_symbol_to_id["~"]) - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - if symbol_id in _id_to_symbol: - s = _id_to_symbol[symbol_id] - # Enclose ARPAbet back in curly braces: - if len(s) > 1 and s[0] == "@": - s = "{%s}" % s[1:] - result += s - return result.replace("}{", " ") - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception("Unknown cleaner: %s" % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(["@" + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s not in ("_", "~") diff --git a/spaces/matthoffner/chatbot/prettier.config.js b/spaces/matthoffner/chatbot/prettier.config.js deleted file mode 100644 index daf4139177fd80181d50b1542647a69cd76fcac4..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/prettier.config.js +++ /dev/null @@ -1,25 +0,0 @@ -module.exports = { - trailingComma: 'all', - singleQuote: true, - plugins: [ - 'prettier-plugin-tailwindcss', - '@trivago/prettier-plugin-sort-imports', - ], - importOrder: [ - 'react', // React - '^react-.*$', // React-related imports - '^next', // Next-related imports - '^next-.*$', // Next-related imports - '^next/.*$', // Next-related imports - '^.*/hooks/.*$', // Hooks - '^.*/services/.*$', // Services - '^.*/utils/.*$', // Utils - '^.*/types/.*$', // Types - '^.*/pages/.*$', // Components - '^.*/components/.*$', // Components - '^[./]', // Other imports - '.*', // Any uncaught imports - ], - importOrderSeparation: true, - importOrderSortSpecifiers: true, -}; diff --git a/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md b/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md deleted file mode 100644 index 12cd5facefa9c4ccd5bd2b3e01252d3ed59dbc90..0000000000000000000000000000000000000000 --- a/spaces/meet244/Legal-Up_Lawyer_Recommendation_System/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Legal-Up Lawyer Recommendation System -emoji: ⚖ -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 4.0.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/meetv25/ML/README.md b/spaces/meetv25/ML/README.md deleted file mode 100644 index 57f32aee10c0e0655c223e0b9ecfe51367ea9f4b..0000000000000000000000000000000000000000 --- a/spaces/meetv25/ML/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ML -emoji: 🦀 -colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mega-snowman/image-to-text/app.py b/spaces/mega-snowman/image-to-text/app.py deleted file mode 100644 index 9aeb11d6bfdcdc8b5955cdc2daca60eda2b5ecbf..0000000000000000000000000000000000000000 --- a/spaces/mega-snowman/image-to-text/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import gradio as gr -from transformers import pipeline -from PIL import Image -import numpy as np - -def process_image(image): - if image is None: - yield [None, None, None] - return - - model = pipeline("image-segmentation") - scores = model(image) - - text = [] - label = {} - sections = [] - for s in scores: - if s['label'].startswith('LABEL_'): - continue - print(s) - text.append(s['label']) - label[s['label']] = s['score'] - mask = np.array(s['mask']) - mask = np.array(list(map(lambda l: list(map(lambda x: 1 if x > 0 else 0, l)), mask))) - sections.append((mask, s['label'])) - - yield [','.join(text), label, (image, sections)] - -app = gr.Interface( - title='Image To Text', - #description='Image To Text', - fn=process_image, - inputs=gr.Image(type='pil'), - outputs=[ - gr.Textbox(label='text'), - gr.Label(label='scores'), - gr.AnnotatedImage(label='segmentation'), - ], - allow_flagging='never', - examples=[['examples/sample1.jpg'], ['examples/sample2.jpg']], - #cache_examples=False -) -app.queue(concurrency_count=20) -app.launch() diff --git a/spaces/mehdidc/text_to_image_ddgan/README.md b/spaces/mehdidc/text_to_image_ddgan/README.md deleted file mode 100644 index 21be90e3a14cb5cbe5f1a96359ae6821d649c258..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/text_to_image_ddgan/README.md +++ /dev/null @@ -1,30 +0,0 @@ ---- - -title: Text To Image DDGAN - -emoji: 🐢 - -colorFrom: red - -colorTo: purple - -sdk: gradio - -sdk_version: 3.8.2 - -app_file: app.py - -pinned: false - ---- - -Text-to-Image Denoising Diffusion GANs is a text-to-image model -based on [Denoising Diffusion GANs](https://arxiv.org/abs/2112.07804>). -The code is based on their official [code](https://nvlabs.github.io/denoising-diffusion-gan/), -which is updated to support text conditioning. Many thanks to the authors of DDGAN for releasing -the code. - -The provided models are trained on [Diffusion DB](https://arxiv.org/abs/2210.14896), which is a dataset that was synthetically -generated with Stable Diffusion, many thanks to the authors for releasing the dataset. - -Models were trained on [JURECA-DC](https://www.fz-juelich.de/en/news/archive/press-release/2021/2021-06-23-jureca-dc) supercomputer at Jülich Supercomputing Centre (JSC), many thanks for the compute provided to train the models. 
diff --git a/spaces/mehedihassan/stabilityai-StableBeluga/app.py b/spaces/mehedihassan/stabilityai-StableBeluga/app.py deleted file mode 100644 index 39f1b103c885a2f8faf99cd31c9109f814704d14..0000000000000000000000000000000000000000 --- a/spaces/mehedihassan/stabilityai-StableBeluga/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/StableBeluga2").launch() \ No newline at end of file diff --git a/spaces/merve/anonymization/public/data-leak/script.js b/spaces/merve/anonymization/public/data-leak/script.js deleted file mode 100644 index 16e45229aac271f5fb29b638c14822725a392865..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/data-leak/script.js +++ /dev/null @@ -1,296 +0,0 @@ -console.clear() - -var isMobile = innerWidth < 1000 -d3.select('body').classed('is-mobile', isMobile) - -var colors = ['#FDE100', '#EE2737' ] -var colors = ['#FDE100', '#8e068e' ] -// var colors = ['#2979FF', '#FF6D00'] -// var colors = ['#2979FF', '#FDD835'] -// var colors = ['#f1a340', '#998ec3' ] - -var color2dark = { - '#FDE100': d3.color('#FDE100').darker(.2), - '#8e068e': d3.color('#8e068e').darker(2), -} - -var colorScale = d3.interpolate(colors[0], colors[1]) - -var s = d3.select('#field-grass').node().offsetWidth/120 - -var width = 120*s -var height = Math.floor(75*s) - -var cs = 20 -var cells = d3.cross( - d3.range(0, width + cs, cs), - d3.range(0, height + cs, cs)) - - - -globalPlayers = decoratePlayers(players0) -globalPlayersH = decoratePlayers(playersleaklow) - -function decoratePlayers(rawPlayers){ - var players = rawPlayers.map(d => d.map(d => d*s)) - players.forEach((d, i) => { - d.color = i < 11 ? colors[0] : colors[1] - d.isRed = i < 11 ? 1 : 0 - d.i = i - }) - - players.renderFns = [] - players.renderAll = () => players.renderFns.forEach(d => d()) - - return players -} - -var playerOptions0 = [players1, players2, players0] -var playerOptions1 = [playersleaklow, playersleakhigh] - -// addPlayAnimation(globalPlayers, '#field-grass', playerOptions0, 'mouseenter') -addPlayAnimation(globalPlayers, '#player-button', playerOptions0) -addPlayAnimation(globalPlayersH, '#high-button', playerOptions1, 'click', true) - -function addPlayAnimation(players, selStr, playerOptions, eventStr='click', loop=false){ - if (loop) { - window.loopInterval = d3.interval(playAnimation, 2500) - } - if (selStr) { - d3.selectAll(selStr).on(eventStr, function() { - if (loop) window.loopInterval.stop() // stop looping if the higher-or-lower button is pressed - playAnimation() - }) - } - - var curPlayerIndex = 0 - function playAnimation(){ - curPlayerIndex++ - curPlayerIndex = curPlayerIndex % playerOptions.length - - var nextPlayers = playerOptions[curPlayerIndex] - .map(d => d.map(d => d*s)) - - var interpolates = players - .map((d, i) => d3.interpolate(d, nextPlayers[i])) - - var dur = 1000 - if (playerOptions.animationTimer) playerOptions.animationTimer.stop() - playerOptions.animationTimer = d3.timer(time => { - var t = d3.clamp(0, time/dur, 1) - - interpolates.forEach((interpolate, i) => { - var [x, y] = interpolate(t) - - players[i][0] = x - players[i][1] = y - }) - - players.renderAll(t) - - if (t == 1) playerOptions.animationTimer.stop() - }) - } -} - -function stopAnimations(){ - if (playerOptions0.animationTimer) playerOptions0.animationTimer.stop() - if (playerOptions1.animationTimer) playerOptions1.animationTimer.stop() -} - - -function initField(name){ - var marginBottom = 30 - var marginTop = 35 - var sel = d3.select('#field-' + 
name).html('').classed('field', true) - .st({marginBottom: marginBottom, marginTop: marginTop}) - - window.c = d3.conventions({ - sel, - margin: {top: 0, left: 0, right: 0, bottom: 0}, - width, - height, - layers: 'dcs' - }) - - var [divSel, ctx, svg] = c.layers - - c.svg = c.svg.append('g').translate([.5, .5]) - - var isRegression = name.includes('regression') - var isVisiblePoints = name != 'playerless' - - var pointName = isRegression || name == 'scatter' ? ' People' : ' Players' - var buttonSel = sel.append('div.button') - .st({top: pointName == ' People' ? 28 : -8, right: -8, position: 'absolute', background: '#fff'}) - .text((isVisiblePoints ? 'Hide' : 'Show') + pointName) - .on('click', () => { - isVisiblePoints = !isVisiblePoints - buttonSel.text((isVisiblePoints ? 'Hide' : 'Show') + pointName) - playerSel.st({opacity: isVisiblePoints ? 1 : 0}) - textSel.st({opacity: isVisiblePoints ? 1 : 0}) - }) - - if (name == 'grass'){ - c.svg.append('rect').at({width, height, fill: '#34A853'}) - divSel.append('div.pointer').append('div') - } - - var roundNum = d => isNaN(d) ? d : Math.round(d) - var chalkSel = c.svg.append('g') - chalkSel.append('path.white') - .at({d: ['M', Math.round(width/2), 0, 'V', height].map(roundNum).join(' '),}) - chalkSel.append('circle.white') - .at({r: 10*s}).translate([width/2, height/2]) - chalkSel.append('path.white') - .at({d: ['M', 0, (75 - 44)/2*s, 'h', 18*s, 'v', 44*s, 'H', 0].map(roundNum).join(' '),}) - chalkSel.append('path.white') - .at({d: ['M', width, (75 - 44)/2*s, 'h', -18*s, 'v', 44*s, 'H', width].map(roundNum).join(' '),}) - - var drag = d3.drag() - .on('drag', function(d){ - stopAnimations() - if (name === 'regression-leak') { - window.loopInterval.stop() - } - - d[0] = Math.round(Math.max(0, Math.min(width, d3.event.x))) - d[1] = Math.round(Math.max(0, Math.min(height, d3.event.y))) - - players.renderAll() - }) - .subject(function(d){ return {x: d[0], y: d[1]} }) - - - var players = name == 'regression-leak' ? globalPlayersH : globalPlayers - - if (isRegression){ - var byColor = d3.nestBy(players, d => d.color) - var regressionSel = c.svg.appendMany('path', byColor) - .at({stroke: d => color2dark[d.key], strokeWidth: 3.5, strokeDasharray: '4 4'}) - .each(function(d){ d.sel = d3.select(this) }) - } - - var bgPlayerSel = c.svg.appendMany('circle.player', players) - .at({r: 15, fill: d => d.color, opacity: 0}) - .translate(d => d) - .call(drag) - - var playerSel = c.svg.appendMany('circle.player', players) - .at({r: 5, fill: d => d.color, opacity: isVisiblePoints ? 1 : 0}) - .translate(d => d) - .call(drag) - - var textSel = c.svg.appendMany('text.chart-title', name == 'playerless' ? [players[0], players[20]] : [players[0]]) - .text(name == 'regression-leak' || name == 'scatter' ? 'New Hire' : name == 'playerless' ? 'Goalie' : '') - .st({pointerEvent: 'none'}) - .at({dy: '.33em', opacity: isVisiblePoints ? 1 : 0, dx: (d, i) => i ? -8 : 8, textAnchor: (d, i) => i ? 
'end' : 'start'}) - - if (name == 'scatter' || isRegression){ - sel.st({marginBottom: marginBottom + 70}) - sel.insert('div.axis.chart-title', ':first-child') - .html(` - Men's - and - Women's - Salaries`) - .st({marginBottom: 10, fontSize: 16}) - - c.x.domain([0, 20]) - c.y.domain([40000, 90000]) - - c.xAxis.ticks(5) - c.yAxis.ticks(5).tickFormat(d => { - var rv = d3.format(',')(d).replace('9', '$9') - if (isMobile){ - rv = rv.replace(',000', 'k').replace('40k', '') - } - - return rv - }) - - - - chalkSel.selectAll('*').remove() - chalkSel.appendMany('path.white', c.x.ticks(5)) - .at({d: d => ['M', Math.round(c.x(d)), '0 V ', c.height].join(' ')}) - - chalkSel.appendMany('path.white', c.y.ticks(5)) - .at({d: d => ['M 0', Math.round(c.y(d)), 'H', c.width].join(' ')}) - - d3.drawAxis(c) - c.svg.selectAll('.axis').lower() - if (isMobile){ - c.svg.selectAll('.y text') - .translate([35, 10]) - .st({fill: name == 'scatter' ? '#000' : ''}) - - c.svg.selectAll('.x text').filter(d => d == 20).at({textAnchor: 'end'}) - c.svg.selectAll('.x text').filter(d => d == 0).at({textAnchor: 'start'}) - } - - - c.svg.select('.x').append('text.chart-title') - .text('Years at Company →') - .translate([c.width/2, 43]) - .at({textAnchor: 'middle'}) - } - - - - render() - players.renderFns.push(render) - function render(){ - renderSVG() - if (name != 'grass' && !isRegression)renderCanvas() - if (isRegression) renderRegression() - } - - function renderSVG(){ - if (playerSel){ - playerSel.translate(d => d) - bgPlayerSel.translate(d => d) - textSel.translate(d => d) - } - } - - function renderCanvas(){ - cells.forEach(d => { - players.forEach(p => { - var dx = p[0] - d[0] - cs/2 - var dy = p[1] - d[1] - cs/2 - - // p.dist = Math.sqrt(dx*dx + dy*dy) - // p.dist = dx*dx + dy*dy - p.dist = Math.pow(dx*dx + dy*dy, 1.5) + .00001 - p.weight = 1/p.dist - - return p.dist - }) - - var sum = d3.sum(players, d => d.isRed*d.weight) - var wsum = d3.sum(players, d => d.weight) - - ctx.fillStyle = colorScale(1 - sum/wsum) - - ctx.fillRect(d[0], d[1], cs, cs) - }) - } - - function renderRegression(){ - byColor.forEach(d => { - var l = ss.linearRegressionLine(ss.linearRegression(d)) - - var x0 = 0 - var x1 = c.width - - d.sel.at({d: `M ${x0} ${l(x0)} L ${x1} ${l(x1)}`}) - }) - } -} - -'grass prediction playerless scatter regression regression-leak' - .split(' ') - .forEach(initField) - - diff --git a/spaces/merve/measuring-fairness/public/hidden-bias/script.js b/spaces/merve/measuring-fairness/public/hidden-bias/script.js deleted file mode 100644 index 526901a0178a3ef069380410dd33fdc0334f2bae..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/hidden-bias/script.js +++ /dev/null @@ -1,467 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -var ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -var colors = { - m: '#7DDAD3', - f: '#9B86EF', - h: '#F0BD80', - l: '#FF777B', - grey: '#ccc', -} - - -var totalWidth = width = d3.select('#graph').node().offsetWidth -var r = 40 - -var sel = d3.select('#graph').html('') - .append('div') - -var extraWidth = d3.clamp(500, innerHeight - 150, innerWidth - 500) -var scale = extraWidth/500 -scale = 1 -sel.st({transform: `scale(${scale})`, transformOrigin: '0% 0%'}) - -var c = d3.conventions({ - sel, - totalWidth, - totalHeight: totalWidth, - margin: {left: 25, right: 7}, - layers: 'sd', -}) -var divSel = c.layers[1] - -c.x.domain([1, 4]).clamp(true).interpolate(d3.interpolateRound) -c.y.domain([1, 4]).clamp(true).interpolate(d3.interpolateRound) - -c.xAxis.ticks(3).tickFormat(d3.format('.1f')) -c.yAxis.ticks(3).tickFormat(d3.format('.1f')) -d3.drawAxis(c) - -var axis2Sel= c.svg.append('g.axis').append('line') - .translate(Math.round(c.y(2)) + .5, 1) - .at({x2: c.width, stroke: '#000', opacity: 0}) - -var meanGPADiff = .6 - -var seed = new Math.seedrandom('hii') -var students = d3.range(150).map((d, index) => { - var collegeGPA = d3.randomUniform.source(seed)(1, 4)() - - // if (index == 93) collegeGPA = 2.05 - // if (index == 87) collegeGPA = 2.15 - // if (index == 32) collegeGPA = 2.25 - if (index == 131) collegeGPA = 3.9 - - // var hsGPA = collegeGPA*d3.randomNormal(1, .4)() - var hsGPA = collegeGPA + d3.randomNormal.source(seed)(meanGPADiff, .8)() - var hsGPAadjusted = hsGPA - meanGPADiff - - var rand = d3.randomUniform.source(seed)(0, 1) - - var isMale = rand() < .5 - var name = names[isMale ? 'm' : 'f'][Math.floor(d/2)] - var lastName = names.last[d] - var maleOffset = rand()*(isMale ? 1 : -1)*.6 - - // if (index == 47) name = 'Mia' - // if (index == 82) name = 'Mason' - - - var compGPA0 = lerp(hsGPAadjusted, collegeGPA, rand()*.7) + maleOffset - var compGPA1 = lerp(compGPA0, collegeGPA + maleOffset, rand()*1.1) - var compGPA2 = compGPA1 + rand()/4 - 1/4/2 - // var compGPA0 = collegeGPA + d3.randomNormal.source(seed)(0, .5)() - // var compGPA1 = collegeGPA + d3.randomNormal.source(seed)(0, .3)() - - if (index == 69){ - compGPA1 = 2.0 - } - if (index == 37){ - compGPA1 = 2.0 - } - - - var isLowIncome = rand() < .5 - - var inteviewGPA = collegeGPA + d3.randomNormal.source(seed)(0, .15)() - var inteviewGPAbias = inteviewGPA + rand()*(isLowIncome ? -1 : 1)*.5 - - // if (index == 115) name = 'Mason' - // if (index == 32) name = 'Mia' - - if (name == 'Camila') name = 'Mia' - - - return {name, index, lastName, collegeGPA, hsGPA, hsGPAadjusted, compGPA0, compGPA1, compGPA2, isMale, isLowIncome, inteviewGPA, inteviewGPAbias} -}) - -students = _.sortBy(students, d => d.collegeGPA) - -students = students.filter(d => { - return d3.entries(d).every(({key, value}) => { - if (!key.includes('GPA')) return true - - return 1 < value && value < 4.0 - }) -}) - - -c.svg.append('path') - .at({ - d: ['M', 0, c.height, 'L', c.width, 0].join(' '), - stroke: '#ccc', - strokeWidth: 2, - strokeDasharray: '4 2' - }) - -!(function(){ - // return window.annotationSel = d3.select(null) - var isDrag = 0 - if (!isDrag) annotations.forEach(d => d.text = d.html ? 
'' : d.text) - if (isDrag){ - d3.select('#sections').st({pointerEvents: 'none'}) - } - - // copy('window.annotations = ' + JSON.stringify(annotations, null, 2)) - var swoopy = d3.swoopyDrag() - .x(d => c.x(d.x)) - .y(d => c.y(d.y)) - .draggable(isDrag) - .annotations(annotations) - .on('drag', d => { - - }) - - - var htmlAnnoSel = divSel.appendMany('div.annotation', annotations.filter(d => d.html)) - .translate(d => [c.x(d.x), c.y(d.y)]).st({position: 'absolute', opacity: 0}) - .append('div') - .translate(d => d.textOffset) - .html(d => d.html) - .st({width: 150}) - - - - var swoopySel = c.svg.append('g.annotations').call(swoopy) - - c.svg.append('marker') - .attr('id', 'arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path') - .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75') - - swoopySel.selectAll('path') - .attr('marker-end', 'url(#arrow)') - .st({'opacity': d => d.path == 'M 0 0' ? 0 : 1}) - window.annotationSel = swoopySel.selectAll('g') - .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0}) - - window.annotationSel = d3.selectAll('g.annotations g, div.annotation') - - swoopySel.selectAll('text') - .each(function(d){ - d3.select(this) - .text('') //clear existing text - .tspans(d3.wordwrap(d.text, d.width || 20), 13) //wrap after 20 char - }) - })() - - - -students = _.sortBy(students, d => d.collegeGPA) -var lineSel = c.svg.appendMany('path', students) - .translate(d => [c.x(d.hsGPA), c.y(d.collegeGPA)]) - .at({ - // fill: d => d.hsGPA > d.collegeGPA ? 'blue' : 'orange', - fill: '#eee', - stroke: '#aaa', - strokeWidth: .5, - opacity: 0, - // strokeWidth: 1/scale, - }) - - -var circleSel = c.svg.appendMany('g', students) - .translate(d => [c.x(d.collegeGPA), c.y(d.hsGPA)]) - .call(d3.attachTooltip) - .on('mouseover', d => { - var html = '' - html += `
              ${d.name} ${d.lastName}
              ` - - if (curSlide.circleFill == 'gender'){ - html += `${d.isMale ? 'Male' : 'Female'}` - } - - if (curSlide.circleFill == 'income'){ - html += `${d.isLowIncome ? 'Low Income' : 'High Income'}` - } - html += ` -
              ${d3.format('.2f')(d[curSlide.yKey]).slice(0, 4)} ${curSlide.index ? 'Predicted' : 'High School'} GPA
              -
              ${d3.format('.2f')(d.collegeGPA).slice(0, 4)} College GPA
              ` - - ttSel.html(html) - }) - - -var innerCircleSel = circleSel.append('circle') - .at({ - r: 5, - fill: '#eee', - stroke: '#aaa' - }) - -// var textSel = circleSel.append('text').text(d => d.isMale ? 'M' : 'F') -// .at({textAnchor: 'middle', dy: '.33em', fontSize: 8, fill: '#eee'}) -// var textSel2 = circleSel.append('text').text(d => d.isLowIncome ? 'L' : 'H') -// .at({textAnchor: 'middle', dy: '.33em', fontSize: 8, opacity: 0}) - - -c.svg.select('.y').selectAll('line').filter(d => d == 4) - .remove() -c.svg.select('.y').selectAll('text').filter(d => d == 4) - .select(function() { - return this.parentNode.insertBefore(this.cloneNode(1), this.nextSibling); - }) - .text('Actual College GPA') - .at({x: c.width/2, y: c.height + 35, textAnchor: 'middle', fontWeight: 800}) - -var yLabelSel = divSel.st({pointerEvents: 'none'}).append('div.axis') - .html('High School GPA') - .translate([0, -9]) - .st({textAlign: 'left', maxWidth: 260}) - -// c.svg.append('text').text('Actual College GPA').st({fontWeight: 800}) - -var longLabel = 'high school GPA, essay, clubs, zip code, teacher recommendations, sports, AP scores, demonstrated interest, gender, SAT scores, interviews, portfolio, race, work experience' - -var slides = [ - { - yKey: 'hsGPA', - isLineVisible: 0, - yLabel: 'High School GPA', - circleFill: 'grey', - circleFillDelay: d => 0, - }, - - { - yKey: 'hsGPA', - isLineVisible: true, - yLabel: 'High School GPA' - }, - - { - yKey: 'hsGPAadjusted', - yLabel: 'high school GPA' - }, - - { - yKey: 'compGPA0', - yLabel: 'high school GPA, essay, clubs, zip code'.replace('essay', 'essay') + '' - }, - - { - yKey: 'compGPA1', - yLabel: longLabel.replace('teacher', 'teacher') + '', - circleFill: 'grey', - circleFillDelay: d => 0, - textFill: '#eee', - }, - - { - yKey: 'compGPA1', - yLabel: longLabel, - circleFill: 'gender', - circleFillDelay: (d, i) => i*20 + (d.isMale ? 0 : 2000), - textFill: '#000', - }, - - { - name: 'proxyHighlight', - yKey: 'compGPA2', - yLabel: longLabel, - circleFill: 'gender', - circleFillDelay: d => 0, - textFill: '#000', - }, - - { - textFill: '#eee', - yLabel: 'Alumni interview', - yKey: 'inteviewGPAbias', - circleFill: 'grey', - text2Opacity: 0, - }, - - { - textFill: '#eee', - yLabel: 'Alumni interview', - yKey: 'inteviewGPAbias', - circleFill: 'income', - circleFillDelay: (d, i) => i*20 + (!d.isLowIncome ? 2000 : 0), - text2Opacity: 1, - }, - - { - textFill: '#eee', - yLabel: 'Alumni interview, household income'.replace('household', 'household') + '', - yKey: 'inteviewGPA', - text2Opacity: 1, - }, -] - -slides.forEach(d => { - if (d.name == 'proxyHighlight'){ - var proxies = 'clubs, interviews, portfolio, sports'.split(', ') - d.yLabel = d.yLabel - .split(', ') - .map(d => { - if (d == 'gender') return `gender` - if (!proxies.includes(d)) return d - - return `${d}` - }) - .join(', ') - } - - - if (d.yLabel[0] != '<') d.yLabel = 'Predicted College GPA using ' + d.yLabel.replace('School', 'school') -}) - -var keys = [] -slides.forEach(d => keys = keys.concat(d3.keys(d))) -_.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) -}) - -slides.forEach((d, i) => { - d.circleFillFn = { - grey: d => '#eee', - gender: d => d.isMale ? colors.m : colors.f, - income: d => d.isLowIncome ? 
colors.l : colors.h, - }[d.circleFill] - - d.index = i -}) - - - - -var gs = d3.graphScroll() - .container(d3.select('.container-1')) - .graph(d3.selectAll('container-1 #graph')) - .eventId('uniqueId1') - .sections(d3.selectAll('.container-1 #sections > div')) - .offset(innerWidth < 900 ? 300 : 520) - .on('active', updateSlide) - - -var prevSlide = -1 -function updateSlide(i){ - var slide = slides[i] - if (!slide) return - curSlide = slide - var {yKey} = slide - - lineSel.transition('yKey').duration(500) - .at({ - d: d => [ - 'M 5 0', - 'C 0 0', - 0, c.y(d['collegeGPA']) - c.y(d[yKey]), - 0, c.y(d['collegeGPA']) - c.y(d[yKey]), - 'S 0 0 -5.5 0' - ].join(' ') - }) - .translate(d => [c.x(d.collegeGPA), c.y(d[yKey])]) - - - circleSel.transition('yKey').duration(500) - .translate(d => [c.x(d.collegeGPA), c.y(d[yKey])]) - - innerCircleSel.transition('colorFill').duration(30) - .delay(slide.circleFillDelay) - .at({ - fill: slide.circleFillFn, - stroke: d => d3.color(slide.circleFillFn(d)).darker(1.5) - }) - - axis2Sel.transition() - .st({opacity: i == 5 ? 1 : 0}) - - lineSel.transition('opacity').duration(500) - .st({ - opacity: slide.isLineVisible ? 1 : 0 - }) - - if (slide.yLabel) yLabelSel.html(slide.yLabel) - - - annotationSel.transition() - .st({opacity: d => i == d.slide ? 1 : 0}) - - - - prevSlide = i -} - -slide = slides[0] - - - - -d3.selectAll('.circle').each(function(){ - var d = d3.select(this).attr('class').split(' ')[0] - - d3.select(this) - .st({ - backgroundColor: d3.color(colors[d]), - borderColor: d3.color(colors[d]).darker(1.5), - }) - - -}) - - - - -function lerp(a, b, t){ return a + t*(b - a) } - - - -c.svg.selectAll('g.annotations').raise() - - - -d3.selectAll('#sections img').attr('aria-hidden', true) - - - - - - - - diff --git a/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js b/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js deleted file mode 100644 index 1dd21b5598fb337243b0e2be15d44d95e32ae03d..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/third_party/topojson-server.js +++ /dev/null @@ -1,2 +0,0 @@ -// https://github.com/topojson/topojson-server v3.0.1 Copyright 2019 Mike Bostock -!function(r,n){"object"==typeof exports&&"undefined"!=typeof module?n(exports):"function"==typeof define&&define.amd?define(["exports"],n):n((r=r||self).topojson=r.topojson||{})}(this,function(r){"use strict";var n=Object.prototype.hasOwnProperty;function t(r,n,t,e,o,i){3===arguments.length&&(e=i=Array,o=null);for(var a=new e(r=1<=r)throw new Error("full hashmap");l=a[c=c+1&f]}return a[c]=e,u[c]=i,i},maybeSet:function(e,i){for(var c=n(e)&f,l=a[c],s=0;l!=o;){if(t(l,e))return u[c];if(++s>=r)throw new Error("full hashmap");l=a[c=c+1&f]}return a[c]=e,u[c]=i,i},get:function(e,i){for(var c=n(e)&f,l=a[c],s=0;l!=o;){if(t(l,e))return u[c];if(++s>=r)break;l=a[c=c+1&f]}return i},keys:function(){for(var r=[],n=0,t=a.length;n>7^a[2]^a[3])}function f(r){var n,o,i,a,f=r.coordinates,c=r.lines,l=r.rings,s=function(){for(var r=t(1.4*f.length,A,E,Int32Array,-1,Int32Array),n=new Int32Array(f.length),e=0,o=f.length;e=0){var i=v[t];o===n&&i===e||o===e&&i===n||(++y,p[t]=1)}else g[t]=n,v[t]=e}}function A(r){return u(f[r])}function E(r,n){return e(f[r],f[n])}h=g=v=null;var L,S=function(r,n,t,e,o){3===arguments.length&&(e=Array,o=null);for(var i=new e(r=1<=r)throw new Error("full hashset");f=i[u=u+1&a]}return i[u]=e,!0},has:function(e){for(var 
u=n(e)&a,f=i[u],c=0;f!=o;){if(t(f,e))return!0;if(++c>=r)break;f=i[u=u+1&a]}return!1},values:function(){for(var r=[],n=0,t=i.length;n>1);no&&(o=n),ai&&(i=a)}function c(r){r.forEach(f)}function l(r){r.forEach(c)}for(var s in r)a(r[s]);return o>=t&&i>=e?[t,e,o,i]:void 0}(r=function(r){var n,t,e={};for(n in r)e[n]=null==(t=r[n])?{type:null}:("FeatureCollection"===t.type?function(r){var n={type:"GeometryCollection",geometries:r.features.map(l)};return null!=r.bbox&&(n.bbox=r.bbox),n}:"Feature"===t.type?l:s)(t);return e}(r)),a=o>0&&i&&function(r,t,e){var o=t[0],i=t[1],a=t[2],u=t[3],f=a-o?(e-1)/(a-o):1,c=u-i?(e-1)/(u-i):1;function l(r){return[Math.round((r[0]-o)*f),Math.round((r[1]-i)*c)]}function s(r,n){for(var t,e,a,u,l,s=-1,h=0,g=r.length,v=new Array(g);++s d.x) - .attr('y', d => d.y) - .attr('width', d => d.width) - .attr('height', d => d.height) - .attr('xlink:href', d => d.path) - .attr('alt', d => d.alt) - - - var buttonHeight = 35 - var buttonWidth = 130 - - var buttonSel = c.svg.appendMany('g.photo-button', data) - .translate((d,i) => [(i * 170) + 100, 0]) - .at({ - // class: "dropdown" - }) - .on('click', function(d, i){ - photoIndex = i - setActiveImage() - timer.stop(); - }) - - buttonSel.append('rect') - .at({ - height: buttonHeight, - width: buttonWidth, - // fill: '#fff' - }) - - buttonSel.append('text') - .at({ - textAnchor: 'middle', - // dominantBaseline: 'central', - dy: '.33em', - x: buttonWidth/2, - y: buttonHeight/2, - class: "monospace" - }) - .text((d,i) => 'ground truth ' + (i + 1)) - - // buttonSel.classed('dropdown', true); - - if (window.__photoPersonTimer) window.__photoPersonTimer.stop() - var timer = window.__photoPersonTimer = d3.interval(() => { - photoIndex = (photoIndex + 1) % data.length; - setActiveImage() - }, 2000) - - function setActiveImage(i){ - photoSel.st({opacity: (d, i) => i == photoIndex ? 
1 : 0 }) - buttonSel.classed('is-active-button', (d, i) => i == photoIndex) - } - setActiveImage() -} - -createPhotoScroller(); - - - - diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py deleted file mode 100644 index 7295f159b0427aef89a5944a0d1eb4c23ee85a7f..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/train.py +++ /dev/null @@ -1,413 +0,0 @@ -import argparse -import math -import random -import os - -import numpy as np -import torch -from torch import nn, autograd, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from tqdm import tqdm - -try: - import wandb - -except ImportError: - wandb = None - -from model import Generator, Discriminator -from dataset import MultiResolutionDataset -from distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - - -def data_sampler(dataset, shuffle, distributed): - if distributed: - return data.distributed.DistributedSampler(dataset, shuffle=shuffle) - - if shuffle: - return data.RandomSampler(dataset) - - else: - return data.SequentialSampler(dataset) - - -def requires_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - -def accumulate(model1, model2, decay=0.999): - par1 = dict(model1.named_parameters()) - par2 = dict(model2.named_parameters()) - - for k in par1.keys(): - par1[k].data.mul_(decay).add_(1 - decay, par2[k].data) - - -def sample_data(loader): - while True: - for batch in loader: - yield batch - - -def d_logistic_loss(real_pred, fake_pred): - real_loss = F.softplus(-real_pred) - fake_loss = F.softplus(fake_pred) - - return real_loss.mean() + fake_loss.mean() - - -def d_r1_loss(real_pred, real_img): - grad_real, = autograd.grad( - outputs=real_pred.sum(), inputs=real_img, create_graph=True - ) - grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean() - - return grad_penalty - - -def g_nonsaturating_loss(fake_pred): - loss = F.softplus(-fake_pred).mean() - - return loss - - -def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01): - noise = torch.randn_like(fake_img) / math.sqrt( - fake_img.shape[2] * fake_img.shape[3] - ) - grad, = autograd.grad( - outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True - ) - path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1)) - - path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length) - - path_penalty = (path_lengths - path_mean).pow(2).mean() - - return path_penalty, path_mean.detach(), path_lengths - - -def make_noise(batch, latent_dim, n_noise, device): - if n_noise == 1: - return torch.randn(batch, latent_dim, device=device) - - noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0) - - return noises - - -def mixing_noise(batch, latent_dim, prob, device): - if prob > 0 and random.random() < prob: - return make_noise(batch, latent_dim, 2, device) - - else: - return [make_noise(batch, latent_dim, 1, device)] - - -def set_grad_none(model, targets): - for n, p in model.named_parameters(): - if n in targets: - p.grad = None - - -def train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device): - loader = sample_data(loader) - - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, 
smoothing=0.01) - - mean_path_length = 0 - - d_loss_val = 0 - r1_loss = torch.tensor(0.0, device=device) - g_loss_val = 0 - path_loss = torch.tensor(0.0, device=device) - path_lengths = torch.tensor(0.0, device=device) - mean_path_length_avg = 0 - loss_dict = {} - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - sample_z = torch.randn(args.n_sample, args.latent, device=device) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - - break - - real_img = next(loader) - real_img = real_img.to(device) - - requires_grad(generator, False) - requires_grad(discriminator, True) - - noise = mixing_noise(args.batch, args.latent, args.mixing, device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - - real_pred = discriminator(real_img) - d_loss = d_logistic_loss(real_pred, fake_pred) - - loss_dict["d"] = d_loss - loss_dict["real_score"] = real_pred.mean() - loss_dict["fake_score"] = fake_pred.mean() - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - d_regularize = i % args.d_reg_every == 0 - - if d_regularize: - real_img.requires_grad = True - real_pred = discriminator(real_img) - r1_loss = d_r1_loss(real_pred, real_img) - - discriminator.zero_grad() - (args.r1 / 2 * r1_loss * args.d_reg_every + 0 * real_pred[0]).backward() - - d_optim.step() - - loss_dict["r1"] = r1_loss - - requires_grad(generator, True) - requires_grad(discriminator, False) - - noise = mixing_noise(args.batch, args.latent, args.mixing, device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - g_loss = g_nonsaturating_loss(fake_pred) - - loss_dict["g"] = g_loss - - generator.zero_grad() - g_loss.backward() - g_optim.step() - - g_regularize = i % args.g_reg_every == 0 - - if g_regularize: - path_batch_size = max(1, args.batch // args.path_batch_shrink) - noise = mixing_noise(path_batch_size, args.latent, args.mixing, device) - fake_img, latents = generator(noise, return_latents=True) - - path_loss, mean_path_length, path_lengths = g_path_regularize( - fake_img, latents, mean_path_length - ) - - generator.zero_grad() - weighted_path_loss = args.path_regularize * args.g_reg_every * path_loss - - if args.path_batch_shrink: - weighted_path_loss += 0 * fake_img[0, 0, 0, 0] - - weighted_path_loss.backward() - - g_optim.step() - - mean_path_length_avg = ( - reduce_sum(mean_path_length).item() / get_world_size() - ) - - loss_dict["path"] = path_loss - loss_dict["path_length"] = path_lengths.mean() - - accumulate(g_ema, g_module, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - r1_val = loss_reduced["r1"].mean().item() - path_loss_val = loss_reduced["path"].mean().item() - real_score_val = loss_reduced["real_score"].mean().item() - fake_score_val = loss_reduced["fake_score"].mean().item() - path_length_val = loss_reduced["path_length"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; r1: {r1_val:.4f}; " - f"path: {path_loss_val:.4f}; mean path: {mean_path_length_avg:.4f}" - ) - ) - - if wandb and args.wandb: - wandb.log( - { - "Generator": g_loss_val, - "Discriminator": d_loss_val, - "R1": r1_val, - "Path Length Regularization": path_loss_val, - "Mean Path Length": mean_path_length, - "Real Score": real_score_val, - "Fake Score": 
fake_score_val, - "Path Length": path_length_val, - } - ) - - if i % 100 == 0: - with torch.no_grad(): - g_ema.eval() - sample, _ = g_ema([sample_z]) - utils.save_image( - sample, - f"sample/{str(i).zfill(6)}.png", - nrow=int(args.n_sample ** 0.5), - normalize=True, - range=(-1, 1), - ) - - if i % 10000 == 0: - torch.save( - { - "g": g_module.state_dict(), - "d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - "g_optim": g_optim.state_dict(), - "d_optim": d_optim.state_dict(), - }, - f"checkpoint/{str(i).zfill(6)}.pt", - ) - - -if __name__ == "__main__": - device = "cuda" - - parser = argparse.ArgumentParser() - - parser.add_argument("path", type=str) - parser.add_argument("--iter", type=int, default=800000) - parser.add_argument("--batch", type=int, default=16) - parser.add_argument("--n_sample", type=int, default=64) - parser.add_argument("--size", type=int, default=256) - parser.add_argument("--r1", type=float, default=10) - parser.add_argument("--path_regularize", type=float, default=2) - parser.add_argument("--path_batch_shrink", type=int, default=2) - parser.add_argument("--d_reg_every", type=int, default=16) - parser.add_argument("--g_reg_every", type=int, default=4) - parser.add_argument("--mixing", type=float, default=0.9) - parser.add_argument("--ckpt", type=str, default=None) - parser.add_argument("--lr", type=float, default=0.002) - parser.add_argument("--channel_multiplier", type=int, default=2) - parser.add_argument("--wandb", action="store_true") - parser.add_argument("--local_rank", type=int, default=0) - - args = parser.parse_args() - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - args.latent = 512 - args.n_mlp = 8 - - args.start_iter = 0 - - generator = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - discriminator = Discriminator( - args.size, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema.eval() - accumulate(g_ema, generator, 0) - - g_reg_ratio = args.g_reg_every / (args.g_reg_every + 1) - d_reg_ratio = args.d_reg_every / (args.d_reg_every + 1) - - g_optim = optim.Adam( - generator.parameters(), - lr=args.lr * g_reg_ratio, - betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio), - ) - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr * d_reg_ratio, - betas=(0 ** d_reg_ratio, 0.99 ** d_reg_ratio), - ) - - if args.ckpt is not None: - print("load model:", args.ckpt) - - ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage) - - try: - ckpt_name = os.path.basename(args.ckpt) - args.start_iter = int(os.path.splitext(ckpt_name)[0]) - - except ValueError: - pass - - generator.load_state_dict(ckpt["g"]) - discriminator.load_state_dict(ckpt["d"]) - g_ema.load_state_dict(ckpt["g_ema"]) - - g_optim.load_state_dict(ckpt["g_optim"]) - d_optim.load_state_dict(ckpt["d_optim"]) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - transform = 
transforms.Compose( - [ - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True), - ] - ) - - dataset = MultiResolutionDataset(args.path, transform, args.size) - loader = data.DataLoader( - dataset, - batch_size=args.batch, - sampler=data_sampler(dataset, shuffle=True, distributed=args.distributed), - drop_last=True, - ) - - if get_rank() == 0 and wandb is not None and args.wandb: - wandb.init(project="stylegan 2") - - train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device) diff --git a/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md b/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md deleted file mode 100644 index 376fe46f50dbbbdfd3479875bb70be37f07d81dd..0000000000000000000000000000000000000000 --- a/spaces/mikeee/nousresearch-nous-hermes-llama2-13b-ggml/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: nous-hermes-llama2-13b-ggml -emoji: 🚀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: true -duplicated_from: mikeee/llama2-7b-chat-uncensored-ggml ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py b/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py deleted file mode 100644 index f590f4778389d3d0476c107ae44c84f8b124d9b4..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/text2lists.py +++ /dev/null @@ -1,153 +0,0 @@ -"""Separate text to zh en lists.""" -# pylint: disable=unused-import, too-many-locals, invalid-name, too-many-branches, too-many-statements, - - -# from typing import Tuple, -from typing import Iterable, List, Optional, Tuple, Union # noqa - -import numpy as np - -# from fastlid import fastlid -from polyglot.text import Detector -from logzero import logger - -from radiobee.lists2cmat import lists2cmat -from radiobee.detect import detect - - -def text2lists( - text: Union[Iterable[str], str], - set_languages: Optional[List[str]] = None, -) -> Tuple[List[str], List[str]]: - """Separate text to zh en lists. 
- - Args: - text: mixed text - set_languages: no default (open-end) - use polyglot.text.Detector to pick two languages - - Attributes: - cmat: correlation matrix (len(list_l) x len(list_r)) - before adjusting (shifting) - offset: plus, [""] * offset + list2 - minus, [""] * (-offset) + list1 - Returns: - two lists, best effort alignment - """ - if not isinstance(text, str) and isinstance(text, Iterable): - try: - text = "\n".join(text) - except Exception as e: - logger.error(e) - raise - - # set_languages default to ["en", "zh"] - if set_languages is None: - lang12 = [elm.code for elm in Detector(text).languages] - - # set_languages = ["en", "zh"] - - # set 'un' to 'en' - # set_languages = ['en' if elm in ['un'] else elm for elm in lang12[:2]] - set_languages = [] - for elm in lang12[:2]: - if elm in ["un"]: - logger.warning(" Unknown language, set to en") - set_languages.append("en") - else: - set_languages.append(elm) - - # fastlid.set_languages = set_languages - - list1 = [] - list2 = [] - - # lang0, _ = fastlid(text[:15000]) - lang0 = detect(text, set_languages) - - res = [] - left = True # start with left list1 - - for elm in [_ for _ in text.splitlines() if _.strip()]: - # lang, _ = fastlid(elm) - lang = detect(elm, set_languages) - if lang == lang0: - res.append(elm) - else: - if left: - # list1.append("\n".join(res)) - list1.extend(res) - else: - # list2.append("\n".join(res)) - list2.extend(res) - left = not left - - res = [elm] - lang0 = lang - - # process the last - if left: - list1.extend(res) - else: - list2.extend(res) - - try: - # lang1, _ = fastlid(' '.join(list1)) - lang1 = detect(" ".join(list1), set_languages) - except Exception as exc: - logger.error(exc) - lang1 = "en" - try: - # lang2, _ = fastlid(' '.join(list2)) - lang2 = detect(" ".join(list2), set_languages) - except Exception as exc: - logger.error(exc) - lang2 = "en" - - # find offset via diagonal(k), - len1, len2 = len(list1), len(list2) - - # len2, len1 = cmat.shape - # len_r, len_c = cmat.shape - # ylim, xlim = cmat.shape - ylim, xlim = len2, len1 # check - - # cmat dim: len1 x len2 or ylim x xlim - cmat = lists2cmat(list1, list2, lang1, lang2) - - # sq_mean_pair = [(elm, np.square(cmat.diagonal(elm)).mean()) for elm in range(2 - ylim, xlim + 1)] - # df = pd.DataFrame(sq_mean_pair, columns=['offset', 'sq_mean']) - # df.plot.scatter('offset', 'sq_mean') - # optimum_offset = df.offset[df.sq_mean.argmax()] - - # equiv to np.argmax(sq_mean) - (ylim - 2) - # locate max, -ylim + 2 ...xlim: range(1 - ylim, xlim) - # sqare sum - - sq_mean = [np.square(cmat.diagonal(elm)).mean() for elm in range(1 - ylim, xlim - 1)] - # tot: xlim + ylim - 1 - - # temp = [np.square(cmat.diagonal(elm)) for elm in range(2 - ylim, xlim + 1)] - # sq_mean = [elm.mean() if np.any(elm) else 0.0 for elm in temp] - - # plt.figure() - # plt.scatter(range(1 - ylim, xlim), sq_mean) - - offset = np.argmax(sq_mean) - (ylim - 1) - - text2lists.cmat = cmat - text2lists.offset = offset - text2lists.lang1 = lang1 - text2lists.lang2 = lang2 - - # shift list1 if offsset >= 0, else shift list2 - if offset > -1: - # list1a = list1[:] - # list2a = [""] * offset + list2 - list2 = [""] * offset + list2 - else: - list1 = [""] * (-offset) + list1 - # list1a = [""] * (-offset) + list1 - # list2a = list2[:] - - return list1, list2 diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md b/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md deleted file mode 100644 index 30d30ba314c9842098c5c38d0a47ce780283d9d9..0000000000000000000000000000000000000000 
--- a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/DATASETS.md +++ /dev/null @@ -1,122 +0,0 @@ -## Prepare Datasets for OVSeg - -This doc is a modification/extension of [MaskFormer](https://github.com/facebookresearch/MaskFormer/blob/main/datasets/README.md) following [Detectron2 fromat](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html). - -A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog) -for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc). -This document explains how to setup the builtin datasets so they can be used by the above APIs. -[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`, -and how to add new datasets to them. - -OVSeg has builtin support for a few datasets. -The datasets are assumed to exist in a directory specified by the environment variable -`DETECTRON2_DATASETS`. -Under this directory, detectron2 will look for datasets in the structure described below, if needed. -``` -$DETECTRON2_DATASETS/ - coco/ # COCOStuff-171 - ADEChallengeData2016/ # ADE20K-150 - ADE20K_2021_17_01/ # ADE20K-847 - VOCdevkit/ - VOC2012/ # PASCALVOC-20 - VOC2010/ # PASCALContext-59, PASCALContext-459 -``` - -You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`. -If left unset, the default is `./datasets` relative to your current working directory. - -Without specific notifications, our model is trained on COCOStuff-171 and evlauted on ADE20K-150, ADE20K-847, PASCALVOC-20, PASCALContext-59 and PASCALContext-459. - -| dataset | split | # images | # categories | -|:--------------:|:---------:|:--------:|:------------:| -| COCO Stuff | train2017 | 118K | 171 | -| ADE20K | val | 2K | 150/847 | -| Pascal VOC | val | 1.5K | 20 | -| Pascal Context | val | 5K | 59/459 | - - -### Expected dataset structure for [COCO Stuff](https://github.com/nightrome/cocostuff): -``` -coco/ - train2017/ # http://images.cocodataset.org/zips/train2017.zip - annotations/ # http://images.cocodataset.org/annotations/annotations_trainval2017.zip - stuffthingmaps/ - stuffthingmaps_trainval2017.zip # http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip - train2017/ - # below are generated - stuffthingmaps_detectron2/ - train2017/ -``` - -The directory `stuffthingmaps_detectron2` is generated by running `python datasets/prepare_coco_stuff_sem_seg.py`. - - - -### Expected dataset structure for [ADE20k Scene Parsing (ADE20K-150)](http://sceneparsing.csail.mit.edu/): -``` -ADEChallengeData2016/ - annotations/ - images/ - objectInfo150.txt - # below are generated - annotations_detectron2/ -``` -The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`. - - -### Expected dataset structure for [ADE20k-Full (ADE20K-847)](https://github.com/CSAILVision/ADE20K#download): -``` -ADE20K_2021_17_01/ - images/ - index_ade20k.pkl - objects.txt - # below are generated - images_detectron2/ - annotations_detectron2/ -``` -The directories `images_detectron2` and `annotations_detectron2` are generated by running `python datasets/prepare_ade20k_full_sem_seg.py`. 
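As a quick, non-authoritative illustration of the `DatasetCatalog` / `MetadataCatalog` access pattern mentioned at the top of this file, the sketch below shows how a registered split could be inspected. The dataset key used here is an assumption for illustration only; the names actually registered depend on this repo's own registration scripts.

```python
# Editor's sketch, not part of the original file. Shows the generic detectron2
# catalog access pattern; "ade20k_sem_seg_val" is an assumed/hypothetical key --
# replace it with whatever name the repo's register_*.py code actually registers.
from detectron2.data import DatasetCatalog, MetadataCatalog

dataset_name = "ade20k_sem_seg_val"  # hypothetical key, adjust as needed

# List of per-image dicts (file_name, sem_seg_file_name, ...)
dataset_dicts = DatasetCatalog.get(dataset_name)

# Metadata such as class names used for visualization and evaluation
metadata = MetadataCatalog.get(dataset_name)
print(len(dataset_dicts), "images;", len(metadata.stuff_classes), "classes")
```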
- -### Expected dataset structure for [Pascal VOC 2012 (PASCALVOC-20)](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#devkit): -``` -VOCdevkit/VOC2012/ - Annotations/ - ImageSets/ - JPEGImages/ - SegmentationClass/ - SegmentationObject/ - SegmentationClassAug/ # https://github.com/kazuto1011/deeplab-pytorch/blob/master/data/datasets/voc12/README.md - # below are generated - images_detectron2/ - annotations_detectron2/ -``` - -It starts with a tar file `VOCtrainval_11-May-2012.tar`. - -We use SBD augmentated training data as `SegmentationClassAug` following [Deeplab](https://github.com/kazuto1011/deeplab-pytorch/blob/master/data/datasets/voc12/README.md) - -The directories `images_detectron2` and `annotations_detectron2` are generated by running `python datasets/prepare_voc_sem_seg.py`. - - -### Expected dataset structure for [Pascal Context](https://www.cs.stanford.edu/~roozbeh/pascal-context/): - -``` -VOCdevkit/VOC2010/ - Annotations/ - ImageSets/ - JPEGImages/ - SegmentationClass/ - SegmentationObject/ - # below are from https://www.cs.stanford.edu/~roozbeh/pascal-context/trainval.tar.gz - trainval/ - labels.txt - 59_labels.txt # https://www.cs.stanford.edu/~roozbeh/pascal-context/59_labels.txt - pascalcontext_val.txt # https://drive.google.com/file/d/1BCbiOKtLvozjVnlTJX51koIveUZHCcUh/view?usp=sharing - # below are generated - annotations_detectron2/ - pc459_val - pc59_val -``` -It starts with a tar file `VOCtrainval_03-May-2010.tar`. You may want to download the 5K validation set [here](https://drive.google.com/file/d/1BCbiOKtLvozjVnlTJX51koIveUZHCcUh/view?usp=sharing). - -The directory `annotations_detectron2` is generated by running `python datasets/prepare_pascal_context.py`. - diff --git a/spaces/momegas/megabots/README.md b/spaces/momegas/megabots/README.md deleted file mode 100644 index 270063909b10a4672bc588f193574687c3da7669..0000000000000000000000000000000000000000 --- a/spaces/momegas/megabots/README.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -title: 🤖 Megabots -emoji: 🤖 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit -python_version: 3.10.0 ---- - -# 🤖 Megabots - -[![Tests](https://github.com/momegas/qnabot/actions/workflows/python-package.yml/badge.svg)](https://github.com/momegas/qnabot/actions/workflows/python-package.yml) -[![Python Version](https://img.shields.io/badge/python-%203.10%20-blue.svg)](#supported-python-versions) -[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) -[![License](https://img.shields.io/badge/License-MIT-informational.svg)](https://github.com/momegas/megabots/blob/main/LICENCE) -![](https://dcbadge.vercel.app/api/server/zkqDWk5S7P?style=flat&n&compact=true) - -🤖 Megabots provides State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵 - -- 👉 Join us on Discord: https://discord.gg/zkqDWk5S7P -- ✈️ Work is managed in this project: https://github.com/users/momegas/projects/5/views/2 -- 🤖 Documentation bot: https://huggingface.co/spaces/momegas/megabots - -**The Megabots library can be used to create bots that:** - -- ⌚️ are production ready, in minutes -- 🗂️ can answer questions over documents -- 💾 can connect to vector databases -- 🎖️ automatically expose the bot as a rebust API using FastAPI (early release) -- 🏓 automatically expose the bot as a UI using Gradio - -**Coming soon:** - -- 🗣️ accept voice as an input using 
[whisper](https://github.com/openai/whisper) -- 👍 validate and correct the outputs of LLMs using [guardrails](https://github.com/ShreyaR/guardrails) -- 💰 semanticly cache LLM Queries and reduce Costs by 10x using [GPTCache](https://github.com/zilliztech/GPTCache) -- 🏋️ mega-easy LLM training -- 🚀 mega-easy deployment - -🤖 Megabots is backed by some of the most famous tools for productionalising AI. It uses [LangChain](https://docs.langchain.com/docs/) for managing LLM chains, [FastAPI](https://fastapi.tiangolo.com/) to create a production ready API, [Gradio](https://gradio.app/) to create a UI. At the moment it uses [OpenAI](https://openai.com/) to generate answers, but we plan to support other LLMs in the future. - -## Getting started - -Note: This is a work in progress. The API might change. - -```bash -pip install megabots -``` - -```python -from megabots import bot -import os - -os.environ["OPENAI_API_KEY"] = "my key" - -# Create a bot 👉 with one line of code. Automatically loads your data from ./index or index.pkl. -# Keep in mind that you need to have one or another. -qnabot = bot("qna-over-docs") - -# Ask a question -answer = bot.ask("How do I use this bot?") - -# Save the index to save costs (GPT is used to create the index) -bot.save_index("index.pkl") - -# Load the index from a previous run -qnabot = bot("qna-over-docs", index="./index.pkl") - -# Or create the index from a directory of documents -qnabot = bot("qna-over-docs", index="./index") - -# Change the model -qnabot = bot("qna-over-docs", model="text-davinci-003") -``` - -## Changing the bot's prompt - -You can change the bots promnpt to customize it to your needs. In the `qna-over-docs` type of bot you will need to pass 2 variables for the `context` (knwoledge searched from the index) and the `question` (the human question). - -```python -from megabots import bot - -prompt = """ -Use the following pieces of context to answer the question at the end. -If you don't know the answer, just say that you don't know, don't try to make up an answer. -Answer in the style of Tony Stark. - -{context} - -Question: {question} -Helpful humorous answer:""" - -qnabot = bot("qna-over-docs", index="./index.pkl", prompt=prompt) - -qnabot.ask("what was the first roster of the avengers?") -``` - -## Working with memory - -You can easily add memory to your `bot` using the `memory` parameter. It accepts a string with the type of the memory to be used. This defaults to some sane dafaults. -Should you need more configuration, you can use the `memory` function and pass the type of memory and the configuration you need. - -```python -from megabots import bot - -qnabot = bot("qna-over-docs", index="./index.pkl", memory="conversation-buffer") - -print(qnabot.ask("who is iron man?")) -print(qnabot.ask("was he in the first roster?")) -# Bot should understand who "he" refers to. -``` - -Or using the `memory`factory function - -```python -from megabots import bot, memory - -mem("conversation-buffer-window", k=5) - -qnabot = bot("qna-over-docs", index="./index.pkl", memory=mem) - -print(qnabot.ask("who is iron man?")) -print(qnabot.ask("was he in the first roster?")) -``` - -NOTE: For the `qna-over-docs` bot, when using memory and passing your custom prompt, it is important to remember to pass one more variable to your custom prompt to facilitate for chat history. The variable name is `history`. - -```python -from megabots import bot - -prompt = """ -Use the following pieces of context to answer the question at the end. 
-If you don't know the answer, just say that you don't know, don't try to make up an answer. - -{context} - -{history} -Human: {question} -AI:""" - -qnabot = bot("qna-over-docs", prompt=prompt, index="./index.pkl", memory="conversation-buffer") - -print(qnabot.ask("who is iron man?")) -print(qnabot.ask("was he in the first roster?")) -``` - -## Using Megabots with Milvus (more DBs comming soon) - -Megabots `bot` can also use Milvus as a backend for its search engine. You can find an example of how to do it below. - -In order to run Milvus you need to follow [this guide](https://milvus.io/docs/example_code.md) to download a docker compose file and run it. -The command is: - -```bash -wget https://raw.githubusercontent.com/milvus-io/pymilvus/v2.2.7/examples/hello_milvus.py -``` - -You can then [install Attu](https://milvus.io/docs/attu_install-docker.md) as a management tool for Milvus - -```python -from megabots import bot - -# Attach a vectorstore by passing the name of the database. Default port for milvus is 19530 and default host is localhost -# Point it to your files directory so that it can index the files and add them to the vectorstore -bot = bot("qna-over-docs", index="./examples/files/", vectorstore="milvus") - -bot.ask("what was the first roster of the avengers?") -``` - -Or use the `vectorstore` factory function for more customisation - -```python - -from megabots import bot, vectorstore - -milvus = vectorstore("milvus", host="localhost", port=19530) - -bot = bot("qna-over-docs", index="./examples/files/", vectorstore=milvus) -``` - -## Exposing an API with FastAPI - -You can also create a FastAPI app that will expose the bot as an API using the create_app function. -Assuming you file is called `main.py` run `uvicorn main:app --reload` to run the API locally. -You should then be able to visit `http://localhost:8000/docs` to see the API documentation. - -```python -from megabots import bot, create_api - -app = create_app(bot("qna-over-docs")) -``` - -## Exposing a Gradio chat-like interface - -You can expose a gradio UI for the bot using `create_interface` function. -Assuming your file is called `ui.py` run `gradio qnabot/ui.py` to run the UI locally. -You should then be able to visit `http://127.0.0.1:7860` to see the API documentation. - -```python -from megabots import bot, create_interface - -demo = create_interface(bot("qna-over-docs")) -``` - -## Customising bot - -The `bot` function should serve as the starting point for creating and customising your bot. Below is a list of the available arguments in `bot`. - -| Argument | Description | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| task | The type of bot to create. Available options: `qna-over-docs`. More comming soon | -| index | Specifies the index to use for the bot. It can either be a saved index file (e.g., `index.pkl`) or a directory of documents (e.g., `./index`). In the case of the directory the index will be automatically created. If no index is specified `bot` will look for `index.pkl` or `./index` | -| model | The name of the model to use for the bot. You can specify a different model by providing its name, like "text-davinci-003". Supported models: `gpt-3.5-turbo` (default),`text-davinci-003` More comming soon. 
| -| prompt | A string template for the prompt, which defines the format of the question and context passed to the model. The template should include placeholder variables like so: `context`, `{question}` and in the case of using memory `history`. | -| memory | The type of memory to be used by the bot. Can be a string with the type of the memory or you can use `memory` factory function. Supported memories: `conversation-buffer`, `conversation-buffer-window` | -| vectorstore | The vectorstore to be used for the index. Can be a string with the name of the databse or you can use `vectorstore` factory function. Supported DBs: `milvus`. | - -| sources | When `sources` is `True` the bot will also include sources in the response. A known [issue](https://github.com/hwchase17/langchain/issues/2858) exists, where if you pass a custom prompt with sources the code breaks. | - -## How QnA bot works - -Large language models (LLMs) are powerful, but they can't answer questions about documents they haven't seen. If you want to use an LLM to answer questions about documents it was not trained on, you have to give it information about those documents. To solve this, we use "retrieval augmented generation." - -In simple terms, when you have a question, you first search for relevant documents. Then, you give the documents and the question to the language model to generate an answer. To make this work, you need your documents in a searchable format (an index). This process involves two main steps: (1) preparing your documents for easy querying, and (2) using the retrieval augmented generation method. - -`qna-over-docs` uses FAISS to create an index of documents and GPT to generate answers. - -```mermaid -sequenceDiagram - actor User - participant API - participant LLM - participant Vectorstore - participant IngestionEngine - participant DataLake - autonumber - - Note over API, DataLake: Ingestion phase - loop Every X time - IngestionEngine ->> DataLake: Load documents - DataLake -->> IngestionEngine: Return data - IngestionEngine -->> IngestionEngine: Split documents and Create embeddings - IngestionEngine ->> Vectorstore: Store documents and embeddings - end - - Note over API, DataLake: Generation phase - - User ->> API: Receive user question - API ->> Vectorstore: Lookup documents in the index relevant to the question - API ->> API: Construct a prompt from the question and any relevant documents - API ->> LLM: Pass the prompt to the model - LLM -->> API: Get response from model - API -->> User: Return response - -``` - -## How to contribute? - -We welcome any suggestions, problem reports, and contributions! -For any changes you would like to make to this project, we invite you to submit an [issue](https://github.com/momegas/megabots/issues). - -For more information, see [`CONTRIBUTING`](https://github.com/momegas/megabots/blob/main/CONTRIBUTING.md) instructions. 
diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py deleted file mode 100644 index 077a24419364fdb5ae2f697f73e28615adae75a7..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/lpips/pretrained_networks.py +++ /dev/null @@ -1,181 +0,0 @@ -from collections import namedtuple -import torch -from torchvision import models as tv -from IPython import embed - -class squeezenet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(squeezenet, self).__init__() - pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.slice6 = torch.nn.Sequential() - self.slice7 = torch.nn.Sequential() - self.N_slices = 7 - for x in range(2): - self.slice1.add_module(str(x), pretrained_features[x]) - for x in range(2,5): - self.slice2.add_module(str(x), pretrained_features[x]) - for x in range(5, 8): - self.slice3.add_module(str(x), pretrained_features[x]) - for x in range(8, 10): - self.slice4.add_module(str(x), pretrained_features[x]) - for x in range(10, 11): - self.slice5.add_module(str(x), pretrained_features[x]) - for x in range(11, 12): - self.slice6.add_module(str(x), pretrained_features[x]) - for x in range(12, 13): - self.slice7.add_module(str(x), pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1 = h - h = self.slice2(h) - h_relu2 = h - h = self.slice3(h) - h_relu3 = h - h = self.slice4(h) - h_relu4 = h - h = self.slice5(h) - h_relu5 = h - h = self.slice6(h) - h_relu6 = h - h = self.slice7(h) - h_relu7 = h - vgg_outputs = namedtuple("SqueezeOutputs", ['relu1','relu2','relu3','relu4','relu5','relu6','relu7']) - out = vgg_outputs(h_relu1,h_relu2,h_relu3,h_relu4,h_relu5,h_relu6,h_relu7) - - return out - - -class alexnet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(alexnet, self).__init__() - alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(2): - self.slice1.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(2, 5): - self.slice2.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(5, 8): - self.slice3.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(8, 10): - self.slice4.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(10, 12): - self.slice5.add_module(str(x), alexnet_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1 = h - h = self.slice2(h) - h_relu2 = h - h = self.slice3(h) - h_relu3 = h - h = self.slice4(h) - h_relu4 = h - h = self.slice5(h) - h_relu5 = h - alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5']) - out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5) - - return out - -class vgg16(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - 
super(vgg16, self).__init__() - vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(4): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(4, 9): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(9, 16): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(16, 23): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(23, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1_2 = h - h = self.slice2(h) - h_relu2_2 = h - h = self.slice3(h) - h_relu3_3 = h - h = self.slice4(h) - h_relu4_3 = h - h = self.slice5(h) - h_relu5_3 = h - vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3']) - out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3) - - return out - - - -class resnet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True, num=18): - super(resnet, self).__init__() - if(num==18): - self.net = tv.resnet18(pretrained=pretrained) - elif(num==34): - self.net = tv.resnet34(pretrained=pretrained) - elif(num==50): - self.net = tv.resnet50(pretrained=pretrained) - elif(num==101): - self.net = tv.resnet101(pretrained=pretrained) - elif(num==152): - self.net = tv.resnet152(pretrained=pretrained) - self.N_slices = 5 - - self.conv1 = self.net.conv1 - self.bn1 = self.net.bn1 - self.relu = self.net.relu - self.maxpool = self.net.maxpool - self.layer1 = self.net.layer1 - self.layer2 = self.net.layer2 - self.layer3 = self.net.layer3 - self.layer4 = self.net.layer4 - - def forward(self, X): - h = self.conv1(X) - h = self.bn1(h) - h = self.relu(h) - h_relu1 = h - h = self.maxpool(h) - h = self.layer1(h) - h_conv2 = h - h = self.layer2(h) - h_conv3 = h - h = self.layer3(h) - h_conv4 = h - h = self.layer4(h) - h_conv5 = h - - outputs = namedtuple("Outputs", ['relu1','conv2','conv3','conv4','conv5']) - out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5) - - return out diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py deleted file mode 100644 index 1c52f135ea6f99d0effe8ce1f7d77cbd66be3745..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from .models import linformer_roberta # noqa diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh deleted file mode 100644 index 1f42492ba7e12735c8743756c564f25f56052592..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s2b0n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/ratatouille/scaling_best/caption/caption_stage_1_ofaplus_base_pretrain_s2_hsep1_bs16_shuf_initrefcocoplus.sh - - diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py deleted file mode 100644 index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/mask.py +++ /dev/null @@ -1,429 +0,0 @@ -import enum -from copy import deepcopy - -import numpy as np -from skimage import img_as_ubyte -from skimage.transform import rescale, resize -try: - from detectron2 import model_zoo - from detectron2.config import get_cfg - from detectron2.engine import DefaultPredictor - DETECTRON_INSTALLED = True -except: - print("Detectron v2 is not installed") - DETECTRON_INSTALLED = False - -from .countless.countless2d import zero_corrected_countless - - -class ObjectMask(): - def __init__(self, mask): - self.height, self.width = mask.shape - (self.up, self.down), (self.left, self.right) = self._get_limits(mask) - self.mask = mask[self.up:self.down, self.left:self.right].copy() - - @staticmethod - def _get_limits(mask): - def indicator_limits(indicator): - lower = indicator.argmax() - upper = len(indicator) - indicator[::-1].argmax() - return lower, upper - - vertical_indicator = mask.any(axis=1) - vertical_limits = indicator_limits(vertical_indicator) - - horizontal_indicator = mask.any(axis=0) - horizontal_limits = indicator_limits(horizontal_indicator) - - return vertical_limits, horizontal_limits - - def _clean(self): - self.up, self.down, self.left, self.right = 0, 0, 0, 0 - self.mask = np.empty((0, 0)) - - def horizontal_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.horizontal_flip(inplace=True) - - self.mask = self.mask[:, ::-1] - return self - - def vertical_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.vertical_flip(inplace=True) - - self.mask = 
self.mask[::-1, :] - return self - - def image_center(self): - y_center = self.up + (self.down - self.up) / 2 - x_center = self.left + (self.right - self.left) / 2 - return y_center, x_center - - def rescale(self, scaling_factor, inplace=False): - if not inplace: - scaled = deepcopy(self) - return scaled.rescale(scaling_factor, inplace=True) - - scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5 - (up, down), (left, right) = self._get_limits(scaled_mask) - self.mask = scaled_mask[up:down, left:right] - - y_center, x_center = self.image_center() - mask_height, mask_width = self.mask.shape - self.up = int(round(y_center - mask_height / 2)) - self.down = self.up + mask_height - self.left = int(round(x_center - mask_width / 2)) - self.right = self.left + mask_width - return self - - def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False): - if not inplace: - cropped = deepcopy(self) - cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True) - return cropped - - if vertical: - if self.up >= self.height or self.down <= 0: - self._clean() - else: - cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0) - if cut_up != 0: - self.mask = self.mask[cut_up:] - self.up = 0 - if cut_down != 0: - self.mask = self.mask[:-cut_down] - self.down = self.height - - if horizontal: - if self.left >= self.width or self.right <= 0: - self._clean() - else: - cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0) - if cut_left != 0: - self.mask = self.mask[:, cut_left:] - self.left = 0 - if cut_right != 0: - self.mask = self.mask[:, :-cut_right] - self.right = self.width - - return self - - def restore_full_mask(self, allow_crop=False): - cropped = self.crop_to_canvas(inplace=allow_crop) - mask = np.zeros((cropped.height, cropped.width), dtype=bool) - mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask - return mask - - def shift(self, vertical=0, horizontal=0, inplace=False): - if not inplace: - shifted = deepcopy(self) - return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True) - - self.up += vertical - self.down += vertical - self.left += horizontal - self.right += horizontal - return self - - def area(self): - return self.mask.sum() - - -class RigidnessMode(enum.Enum): - soft = 0 - rigid = 1 - - -class SegmentationMask: - def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid, - max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4, - max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5, - max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True, - max_vertical_shift=0.1, position_shuffle=True): - """ - :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for - the instance. - :param rigidness_mode: RigidnessMode object - when soft, checks intersection only with the object from which the mask_object was produced - when rigid, checks intersection with any foreground class object - :param max_object_area: float; allowed upper bound for to be considered as mask_object. 
- :param min_mask_area: float; lower bound for mask to be considered valid - :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks; - :param num_variants_per_mask: int; maximal number of the masks for the same object; - :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks - produced by horizontal shift of the same mask_object; higher value -> more diversity - :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be - covered by mask; lower value -> less the objects are covered - :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground - object; lower value -> mask is more on the background than on the objects - :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area; - :param max_scale_change: allowed scale change for the mask_object; - :param horizontal_flip: if horizontal flips are allowed; - :param max_vertical_shift: amount of vertical movement allowed; - :param position_shuffle: shuffle - """ - - assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2' - self.cfg = get_cfg() - self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")) - self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml") - self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold - self.predictor = DefaultPredictor(self.cfg) - - self.rigidness_mode = RigidnessMode(rigidness_mode) - self.max_object_area = max_object_area - self.min_mask_area = min_mask_area - self.downsample_levels = downsample_levels - self.num_variants_per_mask = num_variants_per_mask - self.max_mask_intersection = max_mask_intersection - self.max_foreground_coverage = max_foreground_coverage - self.max_foreground_intersection = max_foreground_intersection - self.max_hidden_area = max_hidden_area - self.position_shuffle = position_shuffle - - self.max_scale_change = max_scale_change - self.horizontal_flip = horizontal_flip - self.max_vertical_shift = max_vertical_shift - - def get_segmentation(self, img): - im = img_as_ubyte(img) - panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"] - return panoptic_seg, segment_info - - @staticmethod - def _is_power_of_two(n): - return (n != 0) and (n & (n-1) == 0) - - def identify_candidates(self, panoptic_seg, segments_info): - potential_mask_ids = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy() - area = mask.sum().item() / np.prod(panoptic_seg.shape) - if area >= self.max_object_area: - continue - potential_mask_ids.append(segment["id"]) - return potential_mask_ids - - def downsample_mask(self, mask): - height, width = mask.shape - if not (self._is_power_of_two(height) and self._is_power_of_two(width)): - raise ValueError("Image sides are not power of 2.") - - num_iterations = width.bit_length() - 1 - self.downsample_levels - if num_iterations < 0: - raise ValueError(f"Width is lower than 2^{self.downsample_levels}.") - - if height.bit_length() - 1 < num_iterations: - raise ValueError("Height is too low to perform downsampling") - - downsampled = mask - for _ in range(num_iterations): - downsampled = zero_corrected_countless(downsampled) - - return downsampled - - def _augmentation_params(self): - scaling_factor 
= np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change) - if self.horizontal_flip: - horizontal_flip = bool(np.random.choice(2)) - else: - horizontal_flip = False - vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift) - - return { - "scaling_factor": scaling_factor, - "horizontal_flip": horizontal_flip, - "vertical_shift": vertical_shift - } - - def _get_intersection(self, mask_array, mask_object): - intersection = mask_array[ - mask_object.up:mask_object.down, mask_object.left:mask_object.right - ] & mask_object.mask - return intersection - - def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks): - for existing_mask in prev_masks: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area - if (intersection_existing > self.max_mask_intersection) or \ - (intersection_current > self.max_mask_intersection): - return False - return True - - def _check_foreground_intersection(self, aug_mask, foreground): - for existing_mask in foreground: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - if intersection_existing > self.max_foreground_coverage: - return False - intersection_mask = intersection_area / aug_mask.area() - if intersection_mask > self.max_foreground_intersection: - return False - return True - - def _move_mask(self, mask, foreground): - # Obtaining properties of the original mask_object: - orig_mask = ObjectMask(mask) - - chosen_masks = [] - chosen_parameters = [] - # to fix the case when resizing gives mask_object consisting only of False - scaling_factor_lower_bound = 0. - - for var_idx in range(self.num_variants_per_mask): - # Obtaining augmentation parameters and applying them to the downscaled mask_object - augmentation_params = self._augmentation_params() - augmentation_params["scaling_factor"] = min([ - augmentation_params["scaling_factor"], - 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1., - 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1. - ]) - augmentation_params["scaling_factor"] = max([ - augmentation_params["scaling_factor"], scaling_factor_lower_bound - ]) - - aug_mask = deepcopy(orig_mask) - aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True) - if augmentation_params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - total_aug_area = aug_mask.area() - if total_aug_area == 0: - scaling_factor_lower_bound = 1. 
- continue - - # Fix if the element vertical shift is too strong and shown area is too small: - vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows - # number of rows which are allowed to be hidden from upper and lower parts of image respectively - max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area) - max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area) - # correcting vertical shift, so not too much area will be hidden - augmentation_params["vertical_shift"] = np.clip( - augmentation_params["vertical_shift"], - -(aug_mask.up + max_hidden_up) / aug_mask.height, - (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height - ) - # Applying vertical shift: - vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"])) - aug_mask.shift(vertical=vertical_shift, inplace=True) - aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True) - - # Choosing horizontal shift: - max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area) - horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area - max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area) - max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area) - allowed_shifts = np.arange(-max_hidden_left, aug_mask.width - - (aug_mask.right - aug_mask.left) + max_hidden_right + 1) - allowed_shifts = - (aug_mask.left - allowed_shifts) - - if self.position_shuffle: - np.random.shuffle(allowed_shifts) - - mask_is_found = False - for horizontal_shift in allowed_shifts: - aug_mask_left = deepcopy(aug_mask) - aug_mask_left.shift(horizontal=horizontal_shift, inplace=True) - aug_mask_left.crop_to_canvas(inplace=True) - - prev_masks = [mask] + chosen_masks - is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \ - self._check_foreground_intersection(aug_mask_left, foreground) - if is_mask_suitable: - aug_draw = aug_mask_left.restore_full_mask() - chosen_masks.append(aug_draw) - augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width - chosen_parameters.append(augmentation_params) - mask_is_found = True - break - - if not mask_is_found: - break - - return chosen_parameters - - def _prepare_mask(self, mask): - height, width = mask.shape - target_width = width if self._is_power_of_two(width) else (1 << width.bit_length()) - target_height = height if self._is_power_of_two(height) else (1 << height.bit_length()) - - return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32') - - def get_masks(self, im, return_panoptic=False): - panoptic_seg, segments_info = self.get_segmentation(im) - potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info) - - panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy()) - downsampled = self.downsample_mask(panoptic_seg_scaled) - scene_objects = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = downsampled == segment["id"] - if not np.any(mask): - continue - scene_objects.append(mask) - - mask_set = [] - for mask_id in potential_mask_ids: - mask = downsampled == mask_id - if not np.any(mask): - continue - - if self.rigidness_mode is RigidnessMode.soft: - foreground = [mask] - elif self.rigidness_mode is RigidnessMode.rigid: - foreground = scene_objects - else: - raise ValueError(f'Unexpected rigidness_mode: {rigidness_mode}') - - 
masks_params = self._move_mask(mask, foreground) - - full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy()) - - for params in masks_params: - aug_mask = deepcopy(full_mask) - aug_mask.rescale(params["scaling_factor"], inplace=True) - if params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - - vertical_shift = int(round(aug_mask.height * params["vertical_shift"])) - horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"])) - aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True) - aug_mask = aug_mask.restore_full_mask().astype('uint8') - if aug_mask.mean() <= self.min_mask_area: - continue - mask_set.append(aug_mask) - - if return_panoptic: - return mask_set, panoptic_seg.detach().cpu().numpy() - else: - return mask_set - - -def propose_random_square_crop(mask, min_overlap=0.5): - height, width = mask.shape - mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing - - if height < width: - crop_size = height - obj_left, obj_right = mask_xs.min(), mask_xs.max() - obj_width = obj_right - obj_left - left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size)) - right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap)) - start_x = np.random.randint(left_border, right_border) - return start_x, 0, start_x + crop_size, height - else: - crop_size = width - obj_top, obj_bottom = mask_ys.min(), mask_ys.max() - obj_height = obj_bottom - obj_top - top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size)) - bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap)) - start_y = np.random.randint(top_border, bottom_border) - return 0, start_y, width, start_y + crop_size diff --git a/spaces/nanomenta/sketch_frame_interpolation/README.md b/spaces/nanomenta/sketch_frame_interpolation/README.md deleted file mode 100644 index e84b0a85fb4c03fa520d6e04bbda6fe44809af3b..0000000000000000000000000000000000000000 --- a/spaces/nanomenta/sketch_frame_interpolation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sketch Frame Interpolation -emoji: 🐠🐠 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nateraw/lavila/eval_narrator.py b/spaces/nateraw/lavila/eval_narrator.py deleted file mode 100644 index 25d7d86eda334fbe8b4e9084acad47f7ceb2d2ae..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/eval_narrator.py +++ /dev/null @@ -1,308 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import os.path as osp -import time -from collections import OrderedDict - -import numpy as np -# https://github.com/numpy/numpy/issues/21079 -try: - import numpy.distutils - numpy.distutils.__config__.blas_opt_info = np.distutils.__config__.blas_ilp64_opt_info -except Exception: - pass -from nlgeval import NLGEval - -import torch -import torchvision.transforms as transforms -import torchvision.transforms._transforms_video as transforms_video - -from lavila.data import datasets -from lavila.data.video_transforms import Permute, SpatialCrop, TemporalCrop -from lavila.models import models -from lavila.models.utils import inflate_positional_embeds -from lavila.utils import distributed as dist_utils -from lavila.utils.preprocess import generate_tokenizer - - -def decode_one(generated_ids, tokenizer): - # get the index of - if tokenizer.eos_token_id == tokenizer.bos_token_id: - if tokenizer.eos_token_id in generated_ids[1:].tolist(): - eos_id = generated_ids[1:].tolist().index(tokenizer.eos_token_id) + 1 - else: - eos_id = len(generated_ids.tolist()) - 1 - elif tokenizer.eos_token_id in generated_ids.tolist(): - eos_id = generated_ids.tolist().index(tokenizer.eos_token_id) - else: - eos_id = len(generated_ids.tolist()) - 1 - generated_text_str = tokenizer.tokenizer.decode(generated_ids[1:eos_id].tolist()) - return generated_text_str - - -def get_args_parser(): - parser = argparse.ArgumentParser(description='LAVILA 0-shot evaluations', add_help=False) - parser.add_argument('--dataset', default='ego4d', type=str, - choices=['ego4d']) - parser.add_argument('--root', - default='datasets/Ego4D/video_5min_chunks_288px/', - type=str, help='path to dataset root') - parser.add_argument('--metadata-val', - default='datasets/Ego4D/ego4d_val.pkl', - type=str, help='path to metadata file (val set)') - parser.add_argument('--output-dir', default='./', type=str, help='output dir') - parser.add_argument('--num-crops', default=1, type=int, help='number of crops in transforms') - parser.add_argument('--num-clips', default=1, type=int, help='number of clips (for untrimmed videos, eg. 
Charades)') - parser.add_argument('--clip-length', default=4, type=int, help='clip length') - parser.add_argument('--clip-stride', default=16, type=int, help='clip stride') - parser.add_argument('--sparse-sample', action='store_true', help='switch to sparse sampling') - parser.add_argument('--batch-size', default=16, type=int, help='batch_size') - # captioning options - parser.add_argument('--caption-sample', default='multinomial_sample', - choices=['multinomial_sample', 'beam_sample', 'group_beam_search']) - parser.add_argument('--caption-top-k', default=None, type=int, help='top-k sampling (predecessor of nucleus sampling)') - parser.add_argument('--caption-top-p', default=0.95, type=float, help='top-p sampling sampling (aka nucleus sampling)') - parser.add_argument('--caption-num-beams', default=3, type=int) - parser.add_argument('--caption-num-beam-groups', default=1, type=int) - parser.add_argument('--caption-temperature', default=0.7, type=float) - parser.add_argument('--caption-length-penalty', default=1.0, type=float) - parser.add_argument('--caption-num-return-sequences', default=1, type=int) - parser.add_argument('--caption-max-len', default=77, type=int) - parser.add_argument('--caption-disable-visual', action='store_true') - parser.add_argument('--caption-early-stop', action='store_true', help='early stopping to save computation') - parser.add_argument('--caption-output-filename', default='caption.txt', type=str) - # others - parser.add_argument('--eval-freq', default=1000, type=int, - help='percentage (1/eval_freq) of val data to evaluate (for fast prototyping)') - parser.add_argument('--print-freq', default=10, type=int) - parser.add_argument('-j', '--workers', default=10, type=int, metavar='N', - help='number of data loading workers per process') - parser.add_argument('--resume', default='', type=str, help='path to latest checkpoint') - parser.add_argument('--use-half', action='store_true') - return parser - - -def main(args): - if args.resume: - ckpt_path = args.resume - elif osp.isfile(osp.join(args.output_dir, 'checkpoint_best.pt')): - ckpt_path = osp.join(args.output_dir, 'checkpoint_best.pt') - else: - raise Exception('no checkpoint found') - - ckpt = torch.load(ckpt_path, map_location='cpu') - - # create model - state_dict = OrderedDict() - for k, v in ckpt['state_dict'].items(): - state_dict[k.replace('module.', '')] = v - - old_args = ckpt['args'] - print('=> creating model: {}'.format(old_args.model)) - model = getattr(models, old_args.model)( - text_use_cls_token=old_args.use_cls_token, - project_embed_dim=old_args.project_embed_dim, - gated_xattn=False if 'gated_xattn' not in old_args else old_args.gated_xattn, - timesformer_gated_xattn=False if 'timesformer_gated_xattn' not in old_args else old_args.timesformer_gated_xattn, - timesformer_freeze_space=False if 'timesformer_freeze_space' not in old_args else old_args.timesformer_freeze_space, - freeze_lm_vclm=False if 'freeze_lm_vclm' not in old_args else old_args.freeze_lm_vclm, - freeze_visual_vclm=False if 'freeze_visual_vclm' not in old_args else old_args.freeze_visual_vclm, - num_frames=args.clip_length, - drop_path_rate=0, - ) - model.cuda() - if 'TIMESFORMER' in old_args.model or 'EGOVLP' in old_args.model: - # inflate weight - print('=> inflating PE in models due to different frame numbers') - state_dict = inflate_positional_embeds( - model.state_dict(), state_dict, - num_frames=args.clip_length, - load_temporal_fix='bilinear', - ) - model.load_state_dict(state_dict, strict=True) - print("=> loaded resume 
checkpoint '{}' (epoch {}, best_metric = {})".format(args.resume, ckpt['epoch'], ckpt['best_acc1'])) - - torch.backends.cudnn.benchmark = True - - tokenizer = generate_tokenizer(old_args.model) - crop_size = 224 if '336PX' not in old_args.model else 336 - if args.num_crops == 1 and args.num_clips == 1: - val_transform = transforms.Compose([ - Permute([3, 0, 1, 2]), # T H W C -> C T H W - transforms.Resize(crop_size), - transforms.CenterCrop(crop_size), - (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if ('OPENAI' not in old_args.model) else - transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])), - ]) - else: - val_transform = transforms.Compose([ - Permute([3, 0, 1, 2]), # T H W C -> C T H W - transforms.Resize(crop_size), - (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if ('OPENAI' not in old_args.model) else - transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])), - TemporalCrop(frames_per_clip=args.clip_length, stride=args.clip_length), - SpatialCrop(crop_size=crop_size, num_crops=args.num_crops), - ]) - - val_dataset = datasets.VideoCaptionDatasetCLIP( - args.dataset, - args.root, - args.metadata_val, - transform=val_transform, - is_training=False, - tokenizer=tokenizer, - clip_length=args.clip_length, - clip_stride=args.clip_stride, - sparse_sample=False, - subsample_stride=args.eval_freq, - ) - - val_loader = torch.utils.data.DataLoader( - val_dataset, batch_size=args.batch_size, shuffle=False, - num_workers=args.workers, pin_memory=True, drop_last=False) - - validate_caption(val_loader, model, tokenizer, args.caption_output_filename, use_half=args.use_half) - - -def validate_caption(val_loader, model, tokenizer, output_filename='caption.txt', use_half=False): - model.eval() - if args.use_half: - model = model.half() - nlgeval = NLGEval() - f = open(output_filename, 'w') - ppls_all = [] - ppls_with_teacher_all = [] - reference = [] - hypothesis = [] - end_time = time.time() - id_offset = 0 - print('=> start forwarding') - with torch.no_grad(): - for i, inputs in enumerate(val_loader): - if i % args.print_freq == 0: - print('finish batch {}/{} in {} sec'.format(i, len(val_loader), time.time() - end_time)) - end_time = time.time() - images = inputs[0].cuda(non_blocking=True) - if use_half: - images = images.half() - target = inputs[1].cuda(non_blocking=True) - - # encode images - image_features = dist_utils.get_model(model).encode_image(images) - - # teacher forcing (to get standard ppl metric) - generated_text_ids_with_teacher, ppls_with_teacher = dist_utils.get_model(model).generate( - image_features, - tokenizer, - target=target, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - teacher_forcing=True, - early_stopping=args.caption_early_stop, - ) - - if args.caption_sample == 'multinomial_sample': - assert args.caption_num_beam_groups == 1 - generated_text_ids, ppls = dist_utils.get_model(model).generate( - image_features, - tokenizer, - target=target.repeat_interleave(args.caption_num_return_sequences, dim=0), - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - num_return_sequences=args.caption_num_return_sequences, - temperature=args.caption_temperature, - early_stopping=args.caption_early_stop, - ) - elif args.caption_sample == 'beam_sample': - 
assert args.caption_num_beam_groups == 1 - generated_text_ids, ppls = dist_utils.get_model(model).beam_sample( - image_features, - tokenizer, - target=target, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - temperature=args.caption_temperature, - length_penalty=args.caption_length_penalty, - num_beams=args.caption_num_beams, - num_return_sequences=args.caption_num_return_sequences, - early_stopping=args.caption_early_stop, - ) - elif args.caption_sample == 'group_beam_search': - assert args.caption_num_beam_groups > 1 and args.caption_num_beams % args.caption_num_beam_groups == 0 - generated_text_ids, ppls = dist_utils.get_model(model).group_beam_search( - image_features, - tokenizer, - target=target if not args.caption_no_gt else None, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - temperature=args.caption_temperature, - length_penalty=args.caption_length_penalty, - num_beams=args.caption_num_beams, - num_beam_groups=args.caption_num_beam_groups, - num_return_sequences=args.caption_num_return_sequences, - early_stopping=args.caption_early_stop, - ) - else: - raise NotImplementedError - ppls_all.append(ppls.reshape(-1, args.caption_num_return_sequences).mean(1)) - ppls_with_teacher_all.append(ppls_with_teacher) - - for j in range(generated_text_ids.shape[0] // args.caption_num_return_sequences): - for k in range(args.caption_num_return_sequences): - jj = j * args.caption_num_return_sequences + k - - generated_text_str = decode_one(generated_text_ids[jj], tokenizer) - gt_text = decode_one(target[j], tokenizer) - generated_text_str_with_teacher = decode_one(generated_text_ids_with_teacher[j], tokenizer) - - from transformers import BertTokenizer - bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - gt_text = bert_tokenizer.decode(bert_tokenizer(gt_text)['input_ids'][1:-1]) - generated_text_str = bert_tokenizer.decode(bert_tokenizer(generated_text_str)['input_ids'][1:-1]) - generated_text_str_with_teacher = bert_tokenizer.decode(bert_tokenizer(generated_text_str_with_teacher)['input_ids'][1:-1]) - reference.append(gt_text) - hypothesis.append(generated_text_str) - s1 = '[{:6d}] Groundtruth | | {}'.format(id_offset + j, gt_text) - s2 = '[{:6d}] Generated | PPL : {:9.3f} | {}'.format(id_offset + j, ppls[jj], generated_text_str) - s3 = '[{:6d}] Generated (w/. teacher) | PPL : {:9.3f} | {}'.format(id_offset + j, ppls_with_teacher[j], generated_text_str_with_teacher) - for s in [s1, s2, s3]: - # if i % args.print_freq == 0: - # print(s) - f.write('{} \n'.format(s)) - id_offset += generated_text_ids.shape[0] // args.caption_num_return_sequences - - ppls_with_teacher_all = torch.cat(ppls_with_teacher_all, dim=0) - ppls_all = torch.cat(ppls_all, dim=0) - - print('PPL (w/. teacher) = {:9.3f}'.format(ppls_with_teacher_all.mean().item())) - print('PPL (w/o. teacher) = {:9.3f}'.format(ppls_all.mean().item())) - f.write('PPL (w/. teacher) = {:9.3f} \n'.format(ppls_with_teacher_all.mean().item())) - f.write('PPL (w/o. 
teacher) = {:9.3f} \n'.format(ppls_all.mean().item())) - - print('Avg length for reference: {:9.3f}'.format(sum(map(lambda sentence: len(sentence.split(' ')), reference)) / len(reference))) - print('Avg length for hypothesis: {:9.3f}'.format(sum(map(lambda sentence: len(sentence.split(' ')), hypothesis)) / len(hypothesis))) - f.write('Avg length for reference: {:9.3f} \n'.format(sum(map(lambda sentence: len(sentence.split(' ')), reference)) / len(reference))) - f.write('Avg length for hypothesis: {:9.3f} \n'.format(sum(map(lambda sentence: len(sentence.split(' ')), hypothesis)) / len(hypothesis))) - - print('=> Calling NLGEval') - f.write('=> Calling NLGEval\n') - metrics_dict = nlgeval.compute_metrics([reference], hypothesis) - for k in metrics_dict: - print('{:16s} = {:9.3f}'.format(k, metrics_dict[k])) - f.write('{:16s} = {:9.3f} \n'.format(k, metrics_dict[k])) - f.close() - - -if __name__ == '__main__': - parser = argparse.ArgumentParser('lavila 0-shot evaluations', parents=[get_args_parser()]) - args = parser.parse_args() - main(args) diff --git a/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py b/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py deleted file mode 100644 index 04f40b6d7b842b2d41ec64404ec33cd01ae01d0a..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/run_with_submitit_finetune_retrieval.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -A script to run multinode training with submitit. -""" -import argparse -import os -import uuid -from pathlib import Path - -import main_finetune_retrieval as main_finetune -import submitit - - -def parse_args(): - parser = main_finetune.get_args_parser() - parser = argparse.ArgumentParser("Submitit for lavila fine-tuning", parents=[parser]) - parser.add_argument("--ngpus", default=8, type=int, help="Number of gpus to request on each node") - parser.add_argument("--nodes", default=8, type=int, help="Number of nodes to request") - parser.add_argument("--timeout", default=2880, type=int, help="Duration of the job") - parser.add_argument("--job_dir", default="", type=str, help="Job dir. Leave empty for automatic.") - - parser.add_argument("--partition", default="learnlab", type=str, help="Partition where to submit") - parser.add_argument("--use_volta32", action='store_true', help="Big models? Use this") - parser.add_argument('--comment', default="", type=str, - help='Comment to pass to scheduler, e.g. priority message') - return parser.parse_args() - - -def get_shared_folder() -> Path: - user = os.getenv("USER") - if Path("/checkpoint/").is_dir(): - p = Path(f"/checkpoint/{user}/experiments/lavila_ft") - p.mkdir(exist_ok=True) - return p - raise RuntimeError("No shared folder available") - - -def get_init_file(): - # Init file must not exist, but it's parent dir must exist. 
- os.makedirs(str(get_shared_folder()), exist_ok=True) - init_file = get_shared_folder() / f"{uuid.uuid4().hex}_init" - if init_file.exists(): - os.remove(str(init_file)) - return init_file - - -class Trainer(object): - def __init__(self, args): - self.args = args - - def __call__(self): - import main_finetune_retrieval as main_finetune - - self._setup_gpu_args() - main_finetune.main(self.args) - - def checkpoint(self): - import submitit - - self.args.dist_url = get_init_file().as_uri() - print("Requeuing ", self.args) - empty_trainer = type(self)(self.args) - return submitit.helpers.DelayedSubmission(empty_trainer) - - def _setup_gpu_args(self): - import submitit - from pathlib import Path - - job_env = submitit.JobEnvironment() - self.args.output_dir = Path(str(self.args.output_dir).replace("%j", str(job_env.job_id))) - self.args.gpu = job_env.local_rank - self.args.rank = job_env.global_rank - self.args.world_size = job_env.num_tasks - print(f"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}") - - -def main(): - args = parse_args() - if args.job_dir == "": - args.job_dir = get_shared_folder() / "%j" - - # Note that the folder will depend on the job_id, to easily track experiments - executor = submitit.AutoExecutor(folder=args.job_dir, slurm_max_num_timeout=30) - - num_gpus_per_node = args.ngpus - nodes = args.nodes - timeout_min = args.timeout - - partition = args.partition - kwargs = {} - if args.use_volta32: - kwargs['slurm_constraint'] = 'volta32gb' - if args.comment: - kwargs['slurm_comment'] = args.comment - - executor.update_parameters( - mem_gb=40 * num_gpus_per_node, - gpus_per_node=num_gpus_per_node, - tasks_per_node=num_gpus_per_node, # one task per GPU - cpus_per_task=10, - nodes=nodes, - timeout_min=timeout_min, # max is 60 * 72 - # Below are cluster dependent parameters - slurm_partition=partition, - slurm_signal_delay_s=120, - **kwargs - ) - - executor.update_parameters(name="lavila_ft") - - args.dist_url = get_init_file().as_uri() - args.output_dir = args.job_dir - - trainer = Trainer(args) - job = executor.submit(trainer) - - print("Submitted job_id:", job.job_id) - - -if __name__ == "__main__": - main() diff --git a/spaces/nateraw/yolov6/yolov6/solver/build.py b/spaces/nateraw/yolov6/yolov6/solver/build.py deleted file mode 100644 index 0684ff7bfae7db248b29850d8ed2e8a33ff623b1..0000000000000000000000000000000000000000 --- a/spaces/nateraw/yolov6/yolov6/solver/build.py +++ /dev/null @@ -1,42 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import os -import math - -import torch -import torch.nn as nn - - -def build_optimizer(cfg, model): - """ Build optimizer from cfg file.""" - g_bnw, g_w, g_b = [], [], [] - for v in model.modules(): - if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter): - g_b.append(v.bias) - if isinstance(v, nn.BatchNorm2d): - g_bnw.append(v.weight) - elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter): - g_w.append(v.weight) - - assert cfg.solver.optim == 'SGD' or 'Adam', 'ERROR: unknown optimizer, use SGD defaulted' - if cfg.solver.optim == 'SGD': - optimizer = torch.optim.SGD(g_bnw, lr=cfg.solver.lr0, momentum=cfg.solver.momentum, nesterov=True) - elif cfg.solver.optim == 'Adam': - optimizer = torch.optim.Adam(g_bnw, lr=cfg.solver.lr0, betas=(cfg.solver.momentum, 0.999)) - - optimizer.add_param_group({'params': g_w, 'weight_decay': cfg.solver.weight_decay}) - optimizer.add_param_group({'params': g_b}) - - del g_bnw, g_w, g_b - return optimizer - - -def build_lr_scheduler(cfg, optimizer, epochs): - 
"""Build learning rate scheduler from cfg file.""" - if cfg.solver.lr_scheduler == 'Cosine': - lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (cfg.solver.lrf - 1) + 1 - else: - LOGGER.error('unknown lr scheduler, use Cosine defaulted') - - scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) - return scheduler, lf diff --git a/spaces/nathanTQ/ChatDev/camel/model_backend.py b/spaces/nathanTQ/ChatDev/camel/model_backend.py deleted file mode 100644 index 6d95dc562bbe34438acc8548fc5f5015dda08c1d..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/camel/model_backend.py +++ /dev/null @@ -1,127 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from abc import ABC, abstractmethod -from typing import Any, Dict - -import openai -import tiktoken - -from camel.typing import ModelType -from chatdev.utils import log_and_print_online - - -class ModelBackend(ABC): - r"""Base class for different model backends. - May be OpenAI API, a local LLM, a stub for unit tests, etc.""" - - @abstractmethod - def run(self, *args, **kwargs) -> Dict[str, Any]: - r"""Runs the query to the backend model. - - Raises: - RuntimeError: if the return value from OpenAI API - is not a dict that is expected. - - Returns: - Dict[str, Any]: All backends must return a dict in OpenAI format. 
- """ - pass - - -class OpenAIModel(ModelBackend): - r"""OpenAI API in a unified ModelBackend interface.""" - - def __init__(self, model_type: ModelType, model_config_dict: Dict) -> None: - super().__init__() - self.model_type = model_type - self.model_config_dict = model_config_dict - - def run(self, *args, **kwargs) -> Dict[str, Any]: - string = "\n".join([message["content"] for message in kwargs["messages"]]) - encoding = tiktoken.encoding_for_model(self.model_type.value) - num_prompt_tokens = len(encoding.encode(string)) - gap_between_send_receive = 50 # known issue - num_prompt_tokens += gap_between_send_receive - - num_max_token_map = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-16k": 16384, - "gpt-3.5-turbo-0613": 4096, - "gpt-3.5-turbo-16k-0613": 16384, - "gpt-4": 8192, - "gpt-4-0613": 8192, - "gpt-4-32k": 32768, - } - num_max_token = num_max_token_map[self.model_type.value] - num_max_completion_tokens = num_max_token - num_prompt_tokens - self.model_config_dict['max_tokens'] = num_max_completion_tokens - response = openai.ChatCompletion.create(*args, **kwargs, - model=self.model_type.value, - **self.model_config_dict) - - log_and_print_online( - "**[OpenAI_Usage_Info Receive]**\nprompt_tokens: {}\ncompletion_tokens: {}\ntotal_tokens: {}\n".format( - response["usage"]["prompt_tokens"], response["usage"]["completion_tokens"], - response["usage"]["total_tokens"])) - if not isinstance(response, Dict): - raise RuntimeError("Unexpected return from OpenAI API") - return response - - -class StubModel(ModelBackend): - r"""A dummy model used for unit tests.""" - - def __init__(self, *args, **kwargs) -> None: - super().__init__() - - def run(self, *args, **kwargs) -> Dict[str, Any]: - ARBITRARY_STRING = "Lorem Ipsum" - - return dict( - id="stub_model_id", - usage=dict(), - choices=[ - dict(finish_reason="stop", - message=dict(content=ARBITRARY_STRING, role="assistant")) - ], - ) - - -class ModelFactory: - r"""Factory of backend models. - - Raises: - ValueError: in case the provided model type is unknown. - """ - - @staticmethod - def create(model_type: ModelType, model_config_dict: Dict) -> ModelBackend: - default_model_type = ModelType.GPT_3_5_TURBO - - if model_type in { - ModelType.GPT_3_5_TURBO, ModelType.GPT_4, ModelType.GPT_4_32k, - None - }: - model_class = OpenAIModel - elif model_type == ModelType.STUB: - model_class = StubModel - else: - raise ValueError("Unknown model") - - if model_type is None: - model_type = default_model_type - - # log_and_print_online("Model Type: {}".format(model_type)) - inst = model_class(model_type, model_config_dict) - return inst diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md deleted file mode 100644 index 011fc2c0731db7f533be0892b32579d78dfd39f1..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download WORK.md +++ /dev/null @@ -1,45 +0,0 @@ -
              -

              Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download: A Complete Guide

              - -

          If you are looking for powerful software to create and perform music live on stage, you might have heard of Ableton Live Suite 9.7.5. This is a digital audio workstation (DAW) that allows you to record, edit, mix and master your musical ideas with ease and flexibility. But what if you don't have the budget to buy the full version of this software? Is there a way to get it for free?
          

              - -

              In this article, we will show you how to download and install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download, a cracked version of the software that bypasses the activation process and lets you use all the features without paying a dime. We will also explain the benefits and risks of using a cracked software, and how to avoid malware and viruses that might come with it.

              -

              Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download


              Download ->->->-> https://urlcod.com/2uIaOQ



              - -

              What is Ableton Live Suite 9.7.5?

              - -

              Ableton Live Suite 9.7.5 is the latest version of Ableton Live, a software for creating musical ideas, turning them into finished songs, and even taking them onto the stage. It is designed for use in live performance as well as for production, and it offers a unique workflow that lets you freely and independently start and stop any number of audio or MIDI loops in real-time, without interrupting your creative flow.

              - -

              Ableton Live Suite 9.7.5 comes with a lot of features and tools that make it a complete solution for music creation and performance. Some of these features are:

              - -
                -
              • Multitrack recording up to 32-bit/192 kHz
              • -
              • Advanced warping and real-time time stretching
              • -
              • Unlimited instruments, audio effects and MIDI effects per project
              • -
              • VST and Audio Unit support
              • -
              • Group tracks and MIDI Clock/sync
              • -
              • Nondestructive editing with unlimited undo
              • -
              • Powerful MIDI sequencing of software and hardware instruments
              • -
              • ReWire, Time signature changes, and Track Freeze
              • -
              • MIDI output to hardware synths
              • -
              • MIDI remote control instant mapping
              • -
              • WAV, AIFF, MP3, Ogg Vorbis, FLAC file support
              • -
              • And much more...
              • -
              - -

              Ableton Live Suite 9.7.5 also comes with a collection of instruments, sounds, samples and loops that you can use to create any kind of music genre you want. You can also customize your own sounds and effects with the built-in synthesizers, samplers, drum machines and audio effects.

              - -

              How to Download and Install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download?

              - -

              If you want to get Ableton Live Suite 9.7.5 for free, you will need to download a cracked version of the software from a reliable source. A cracked version is a modified version of the software that has been hacked or patched to bypass the activation process and make it work without a license key or serial number.

              - -

              One of the sources that offer Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download is [^1^], a website that provides free downloads of various software and games for Windows and Mac OS X. Here are the steps to download and install Ableton Live Suite 9.7.5 Crack For Windows - [CrackzSoft] Download from this website:

              - -
                -
              1. Go to [^1^] and search for "Ableton Live Suite 9.7.5" in the search box.
              2. -
              3. Select the first result that says "Ableton Live Suite 9.7.5 Free Download".
              4. -
              5. Scroll down to the bottom of the page and click on the green button that says "Download Now".
              6. -
              7. You will be redirected to another page where you will need to complete a captcha verification to prove that you

                -

                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md deleted file mode 100644 index 5fc1fb525bae184dc453d658f9ad915e7f801d39..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Framemaker 11 Amtlib Dll.md +++ /dev/null @@ -1,36 +0,0 @@ -
                -

                How to Fix Adobe FrameMaker 11 Amtlib.dll Error

                -

                If you are using Adobe FrameMaker 11, you may encounter an error message that says "amtlib.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vender for support."[^3^]

                -

                adobe framemaker 11 amtlib dll


                DOWNLOAD ✶✶✶ https://urlcod.com/2uI9Vl



                -

                This error may occur due to corrupted or missing dll files, incompatible versions of FrameMaker and Windows, or other system issues. In this article, we will show you some possible solutions to fix this error and restore your FrameMaker functionality.

                -

                Solution 1: Repair or Reinstall FrameMaker 11

                -

                One of the simplest ways to fix the amtlib.dll error is to repair or reinstall FrameMaker 11. This will ensure that you have the latest and compatible version of the software and its components. To do this, follow these steps:

                -
                  -
                1. Open Programs & Features in Control Panel.
                2. -
                3. Find Adobe FrameMaker 11 in the list of installed programs and select it.
                4. -
                5. Click on Change/Uninstall and choose Repair or Reinstall from the options.
                6. -
                7. Follow the on-screen instructions to complete the process.
                8. -
                9. Restart your computer and launch FrameMaker 11 again.
                10. -
                -

                Solution 2: Download and Replace Amtlib.dll File

                -

                If repairing or reinstalling FrameMaker 11 does not work, you can try to download and replace the amtlib.dll file manually. However, this is not recommended as it may cause further problems if you download a wrong or malicious file. You should only do this if you are confident about the source and compatibility of the file. To do this, follow these steps:

                -
                  -
                1. Find a reliable website that offers dll file downloads, such as https://www.dll-files.com/amtlib.dll.html[^3^]. Make sure you download the file that matches your FrameMaker 11 version and Windows system.
                2. -
                3. Extract the downloaded file and copy it to a safe location.
                4. -
                5. Navigate to the FrameMaker 11 install location. The default install location is: C:\\Program Files\\Adobe\\Adobe FrameMaker 11[^5^].
                6. -
                7. Find and rename the existing amtlib.dll file to something else, such as amtlib.dll.old.
                8. -
                9. Paste the new amtlib.dll file into the same folder.
                10. -
                11. Restart your computer and launch FrameMaker 11 again.
                12. -
                -

                Solution 3: Update FrameMaker 11 to a Newer Version

                -

                If none of the above solutions work, you may need to update FrameMaker 11 to a newer version that is compatible with your Windows system and has fixed the critical vulnerabilities that may cause the amtlib.dll error. To do this, follow these steps:

                -

                -
                  -
                1. Visit https://www.adobe.com/products/framemaker.html[^4^] and choose a plan that suits your needs. You can also request a free trial or a callback from Adobe customer support.
                2. -
                3. Download and install the latest version of FrameMaker according to the instructions provided by Adobe.
                4. -
                5. Enter your serial number or sign up for a subscription to activate the product.
                6. -
                7. Launch FrameMaker and enjoy its features.
                8. -
                -

                We hope this article has helped you fix the Adobe FrameMaker 11 amtlib.dll error. If you have any questions or feedback, please let us know in the comments below.

                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md deleted file mode 100644 index 094dbfb9712d3cc6febe3d051b95cd31854aaaa2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Elcomsoft Wireless Security Auditor Full Crack [BETTER].md +++ /dev/null @@ -1,35 +0,0 @@ -
                -

                How to Use Elcomsoft Wireless Security Auditor Full to Test Your Wi-Fi Network Security

                - -

                Wi-Fi networks are ubiquitous and convenient, but they also pose a security risk. If your Wi-Fi network is not properly secured, hackers can easily access your data, devices, and online accounts. That's why you need to test your Wi-Fi network security regularly and fix any vulnerabilities that you find.

                - -

                One of the tools that you can use to test your Wi-Fi network security is Elcomsoft Wireless Security Auditor Full. This is a powerful and comprehensive software that can perform various types of attacks on your Wi-Fi network, such as:

                -

                download elcomsoft wireless security auditor full crack


                DOWNLOADhttps://urlcod.com/2uIaq5



                - -
                  -
                • Brute-force attacks: trying different combinations of passwords until the correct one is found.
                • -
                • Dictionary attacks: trying passwords from a predefined list of common or likely words.
                • -
                • Mask attacks: trying passwords that match a certain pattern or format.
                • -
                • Hybrid attacks: combining different methods of password guessing.
                • -
                • WPS attacks: exploiting the Wi-Fi Protected Setup feature that allows devices to connect to the network without entering a password.
                • -
                - -

                Elcomsoft Wireless Security Auditor Full can also analyze the security of your Wi-Fi network by checking the encryption type, the signal strength, the number of connected devices, and other parameters. It can also generate reports and recommendations on how to improve your Wi-Fi network security.

                - -

                To use Elcomsoft Wireless Security Auditor Full, you need to have a compatible wireless adapter that supports monitor mode and packet injection. You also need to have a license key to activate the full version of the software. You can download the trial version of Elcomsoft Wireless Security Auditor Full from here and purchase the license key from here.

                - -

                Once you have installed and activated Elcomsoft Wireless Security Auditor Full, you can follow these steps to test your Wi-Fi network security:

                - -
                  -
                1. Launch Elcomsoft Wireless Security Auditor Full and click on the "New Project" button.
                2. -
                3. Select your wireless adapter from the list and click on the "Start Scan" button.
                4. -
                5. Wait for the scan to finish and select your target Wi-Fi network from the list.
                6. -
                7. Click on the "Attack" button and choose the type of attack that you want to perform.
                8. -
                9. Configure the attack settings according to your preferences and click on the "Start" button.
                10. -
                11. Wait for the attack to finish and check if the password of your target Wi-Fi network has been cracked.
                12. -
                13. If the password has been cracked, change it immediately and follow the recommendations on how to improve your Wi-Fi network security.
                14. -
                - -

                By using Elcomsoft Wireless Security Auditor Full, you can test your Wi-Fi network security and prevent hackers from accessing your data, devices, and online accounts. Remember to test your Wi-Fi network security regularly and keep your software updated to ensure optimal protection.

                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md deleted file mode 100644 index 77785b50825089907da7b54130ab0b1c31652654..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dwg To Pdf Converter Mx Serial Key.md +++ /dev/null @@ -1,125 +0,0 @@ - -

                DWG to PDF Converter MX Serial Key: How to Convert Your CAD Files Easily and Securely

                -

                If you are looking for a simple and effective way to convert your CAD files from DWG, DXF, or DWF format to PDF format, you might want to try DWG to PDF Converter MX. This is a powerful software that allows you to batch convert your CAD files into high-quality PDF files with various options and features. In this article, we will show you how to download, install, and use DWG to PDF Converter MX with a valid serial key. We will also answer some of the most frequently asked questions about this software and provide some tips and tricks on how to troubleshoot common issues and errors.

                -

                What is DWG to PDF Converter MX?

                -

                DWG to PDF Converter MX is a stand-alone software that does not require AutoCAD or any other CAD software. It can convert DWG, DXF, and DWF files into PDF files quickly and easily. It supports all versions of DWG, DXF, and DWF formats from R2.5-2021. It also supports AutoCAD pen sets file (*.ctb) and OLE entity (such as inline Word, Excel document objects in the DWG files).

                -

                Dwg To Pdf Converter Mx Serial Key


                Download File ····· https://urlcod.com/2uIcwb



                -

                Features and benefits of DWG to PDF Converter MX

                -

                Some of the main features and benefits of using DWG to PDF Converter MX are:

                -
                  -
                • You can set the page size directly or select the predefined page size (such as A4, A3, Letter, etc.)
                • -
              • You can have the output page size adjusted automatically to match each layout's settings.
                • -
                • You can export layer and raster image object to PDF.
                • -
                • You can export the arc/circle objects to true arc/circle objects of PDF.
                • -
                • You can set the quality of PDF with DPI parameter.
                • -
                • You can encrypt the outputted PDF files with password protection.
                • -
                • You can customize the watermark with text or image.
                • -
                • You can create bookmarks automatically with layout name and file name.
                • -
                • You can convert model space, all layouts, all paper space, or last active layout to PDF file.
                • -
                • You can export pure text format or compressed format PDF file.
                • -
                • You can adjust the generating order of DWG drawing files.
                • -
              • You can make text entities searchable and preserve hyperlinks.
                • -
                -

                How to download and install DWG to PDF Converter MX

                -

                To download and install DWG to PDF Converter MX, you need to follow these steps:

                -
                  -
                1. Go to the official website of DWG to PDF Converter MX and click on the "Download" button.
                2. -
                3. Save the setup file (dwg2pdfmx.exe) to your computer and run it.
                4. -
                5. Follow the instructions on the screen to complete the installation process.
                6. -
                7. Launch the software and enter your serial key when prompted.
                8. -
                -

                If you do not have a serial key, you can use the trial version for 15 days with some limitations. You can also purchase a full version from the website or contact the support team for more information.

                -

                What is a serial key and why do you need it?

                -

                A serial key is a unique code that is used to activate and register a software product. It usually consists of a combination of letters and numbers that is provided by the software developer or vendor. A serial key is also known as a product key, license key, activation key, or registration key.

                -

                The difference between a trial version and a full version

                -

                A trial version is a free version of a software product that allows you to test its features and functions for a limited period of time. A trial version usually has some restrictions or limitations, such as watermark, file size, output quality, or number of conversions. A trial version is intended to give you an idea of how the software works and whether it meets your needs and expectations.

                -

                A full version is a paid version of a software product that gives you access to all its features and functions without any restrictions or limitations. A full version also provides you with technical support and updates from the software developer or vendor. A full version is intended to give you the best user experience and satisfaction with the software.

                -

                How to get a valid serial key for DWG to PDF Converter MX

                -

                To get a valid serial key for DWG to PDF Converter MX, you need to purchase a full version from the official website of DWG to PDF Converter MX. The price of the full version is $99.50 USD for one user license. You can also get discounts for multiple user licenses or volume licenses. You can pay by credit card, PayPal, bank transfer, or other methods.

                -

                After you complete your payment, you will receive an email with your serial key and download link. You can also find your serial key in your account on the website. You need to enter your serial key in the software to activate and register it. You can use your serial key on one computer only. If you want to use it on another computer, you need to uninstall it from the first computer and install it on the second computer.

                -

                -

                How to use DWG to PDF Converter MX to convert your CAD files

                -

                Using DWG to PDF Converter MX to convert your CAD files is very easy and fast. You just need to follow these steps:

                -

                Step-by-step guide on how to convert DWG, DXF, and DWF files to PDF

                -
                  -
                1. Launch DWG to PDF Converter MX and click on the "Add Files" button to add the CAD files that you want to convert. You can also drag and drop the files into the software window.
                2. -
                3. Select the output folder where you want to save the converted PDF files.
                4. -
                5. Click on the "Options" button to customize your output settings, such as page size, layout, quality, encryption, watermark, bookmarks, etc.
                6. -
                7. Click on the "Convert Now" button to start the conversion process. You can see the progress and status of each file in the software window.
                8. -
                9. When the conversion is done, you can open the output folder and view the converted PDF files with any PDF viewer or editor.
                10. -
                -

                Tips and tricks on how to customize your output settings and optimize your conversion results

                -

                Here are some tips and tricks that can help you customize your output settings and optimize your conversion results:

                -
                  -
                • If you want to convert only a specific part of a CAD file, you can use the "Clip" function in the software. You can select an area or a window in the CAD file and convert it into a PDF file.
                • -
                • If you want to merge multiple CAD files into one PDF file, you can use the "Combine" function in the software. You can select several CAD files and combine them into one PDF file with bookmarks.
                • -
                • If you want to split a large CAD file into smaller PDF files, you can use the "Split" function in the software. You can split a CAD file by page number, file size, or layout name.
                • -
                • If you want to add a table to your PDF file, you can use the "Table" function in the software. You can create a table with rows and columns and insert data into it. You can also adjust the font, color, border, and alignment of the table.
                • -
                • If you want to add an image to your PDF file, you can use the "Image" function in the software. You can insert an image from your computer or from a URL. You can also resize, rotate, crop, and flip the image.
                • -
                • If you want to add a text to your PDF file, you can use the "Text" function in the software. You can type or paste any text into the PDF file. You can also change the font, size, color, style, and alignment of the text.
                • -
                -

                How to troubleshoot common issues and errors with DWG to PDF Converter MX

                -

                Although DWG to PDF Converter MX is a reliable and stable software, you may encounter some issues and errors while using it. Here are some of the common problems and their solutions:

                -

                How to fix invalid serial key errors

                -

                If you get an error message that says "Invalid serial key" or "Serial key expired", it means that your serial key is not valid or has expired. This may happen for several reasons, such as:

                -
                  -
                • You have entered the wrong serial key or made a typo.
                • -
                • You have used the same serial key on more than one computer.
                • -
                • You have changed your computer hardware or operating system.
                • -
                • You have downloaded a pirated or cracked version of the software.
                • -
                -

                To fix this problem, you need to do the following:

                -
                  -
                • Make sure that you have entered the correct serial key without any spaces or extra characters.
                • -
                • Make sure that you have purchased a full version from the official website of DWG to PDF Converter MX and not from any other sources.
                • -
                • Make sure that you have uninstalled the software from your previous computer before installing it on a new one.
                • -
                • Contact the support team of DWG to PDF Converter MX and provide them with your order information and serial key. They will help you activate and register your software.
                • -
                -

                How to solve compatibility and performance issues

                -

                If you experience any compatibility or performance issues with DWG to PDF Converter MX, such as crashing, freezing, slow conversion, or poor output quality, it may be due to some factors, such as:

                -
                  -
                • Your computer does not meet the minimum system requirements for DWG to PDF Converter MX.
                • -
                • Your CAD files are corrupted, damaged, or protected by passwords or encryption.
                • -
                • Your output settings are too high or too low for your PDF files.
                • -
                • Your antivirus or firewall software is blocking or interfering with DWG to PDF Converter MX.
                • -
                -

                To solve this problem, you need to do the following:

                -
                  -
                • Check the system requirements for DWG to PDF Converter MX and make sure that your computer meets them. The system requirements are:
                • - - - - - - - -
                  Operating System: Windows XP/Vista/7/8/10 (32-bit and 64-bit)
                  Processor: Pentium III 1500 MHz or higher
                  Memory: 512 MB RAM or more
                  Disk Space: 100 MB free hard disk space or more
                  Display: 1024 x 768 resolution or higher
                  Internet Connection: Required for activation and updates
                  -
                • Scan your CAD files with an antivirus software and repair them with a CAD repair tool if they are corrupted, damaged, or protected.
                • -
                • Adjust your output settings according to your needs and preferences. You can lower the DPI parameter, reduce the page size, compress the output format, or remove unnecessary elements from your PDF files.
                • -
                • Disable or whitelist your antivirus or firewall software for DWG to PDF Converter MX. You can also temporarily turn off your internet connection while using the software.
                • -
                -

                Conclusion

                -

                DWG to PDF Converter MX is a great software that can help you convert your CAD files from DWG, DXF, or DWF format to PDF format easily and securely. It has many features and benefits that can enhance your conversion results and user experience. It is also easy to download, install, and use with a valid serial key. If you have any questions or issues with the software, you can contact the support team of DWG to PDF Converter MX for assistance.

                -

                If you want to try DWG to PDF Converter MX for yourself, you can download the trial version from the official website of DWG to PDF Converter MX and use it for 15 days. If you are satisfied with the software, you can purchase the full version and get a serial key to activate and register it. You can also get discounts for multiple user licenses or volume licenses.

                -

                DWG to PDF Converter MX is the best solution for converting your CAD files to PDF files. Don't miss this opportunity and get your copy today!

                -

                FAQs

                -

                What are the system requirements for DWG to PDF Converter MX?

                -

                The system requirements for DWG to PDF Converter MX are:

                - - - - - - - -
                Operating System: Windows XP/Vista/7/8/10 (32-bit and 64-bit)
                Processor: Pentium III 1500 MHz or higher
                Memory: 512 MB RAM or more
                Disk Space: 100 MB free hard disk space or more
                Display: 1024 x 768 resolution or higher
                Internet Connection: Required for activation and updates
                -

                How much does DWG to PDF Converter MX cost?

                -

                The price of DWG to PDF Converter MX is $99.50 USD for one user license. You can also get discounts for multiple user licenses or volume licenses. You can pay by credit card, PayPal, bank transfer, or other methods.

                -

                Is DWG to PDF Converter MX safe and reliable?

                -

                Yes, DWG to PDF Converter MX is safe and reliable. It does not contain any viruses, malware, spyware, or adware. It does not modify or damage your original CAD files. It does not collect or share any of your personal or confidential information. It is certified by several reputable software review sites and trusted by thousands of users worldwide.

                -

                Can I convert multiple CAD files at once with DWG to PDF Converter MX?

                -

                Yes, you can convert multiple CAD files at once with DWG to PDF Converter MX. You can add as many CAD files as you want to the software and batch convert them into PDF files with one click. You can also set different output settings for each CAD file or apply the same settings to all of them.

                -

                How can I contact the support team of DWG to PDF Converter MX?

                -

                If you have any questions, issues, feedback, or suggestions about DWG to PDF Converter MX, you can contact the support team by email at support@dwgtool.com. They will reply to you within 24 hours and provide you with professional and friendly assistance.

                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md deleted file mode 100644 index b34676d2a3c1eab964f283a12d7a34beaa79afb9..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key Keygen Free.md +++ /dev/null @@ -1,28 +0,0 @@ - -```html -

                Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen: Learn Any Language Easily and Quickly

                -

                Do you want to learn a new language, but don't have the time or money to enroll in a course? Do you wish you could speak fluently with native speakers, without feeling embarrassed or frustrated? If you answered yes, then you need Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen.

                -

                Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen


                Download ––– https://urlcod.com/2uIaTG



                -

                Rosetta Stone is the world's leading software for learning languages. It uses a natural and intuitive method that helps you learn through immersion, just like you learned your first language. You will listen, speak, read and write in your new language, without memorizing rules or translations. You will also get feedback and guidance from Rosetta Stone's advanced speech recognition technology, which helps you improve your pronunciation and accent.

                -

                With Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen, you can access all the features and benefits of Rosetta Stone, without paying a dime. You can choose from over 30 languages, including English, Spanish, French, German, Italian, Chinese, Japanese and more. You can also customize your learning plan, track your progress and sync your lessons across your devices.

                -

                But how can you get Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen? It's simple. Just follow these steps:

                -
                  -
                1. Download Rosetta Stone 4.1.15 from the official website or from any trusted source.
                2. -
                3. Install the software on your computer.
                4. -
                5. Download the crack file from this link: https://www.example.com/crack.zip
                6. -
                7. Unzip the crack file and copy the contents to the installation folder of Rosetta Stone.
                8. -
                9. Run the crack file and generate a serial key.
                10. -
                11. Enter the serial key when prompted by Rosetta Stone.
                12. -
                13. Enjoy learning any language with Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen!
                14. -
                -

                Don't miss this opportunity to learn any language easily and quickly with Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen. Download it now and start your journey to fluency!

                -``` - -```html -

                Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is the best way to learn any language at your own pace and convenience. You don't need to worry about deadlines, schedules, or exams. You can learn whenever and wherever you want, whether it's at home, in the office, or on the go. You can also switch between languages and levels as you wish, without losing your progress.

                -

                Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is also the most fun and engaging way to learn any language. You will not get bored or frustrated with boring drills or exercises. Instead, you will enjoy interactive and immersive activities that will keep you motivated and interested. You will also learn from real-life scenarios and situations that will prepare you for real-world conversations and interactions.

                -

                -

                Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen is the ultimate solution for anyone who wants to learn any language easily and quickly. It is trusted by millions of learners and educators around the world, who have achieved amazing results with Rosetta Stone. Whether you want to learn a new language for personal, professional, or academic reasons, Rosetta Stone 4.1.15 Crack(VasiaZozulia) Serial Key keygen will help you achieve your goals.

                -```

                -
                -
                \ No newline at end of file diff --git a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py b/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py deleted file mode 100644 index 1de174b5b160c5288fbb9f738e58c694791f29c3..0000000000000000000000000000000000000000 --- a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/utils/cky_algorithm.py +++ /dev/null @@ -1,91 +0,0 @@ -import re -import numpy as np -from weakly_supervised_parser.tree.helpers import Tree - - -def CKY(sent_all, prob_s, label_s, verbose=False): - r""" - choose tree with maximum expected number of constituents, - or max \sum_{(i,j) \in tree} p((i,j) is constituent) - """ - - def backpt_to_tree(sent, backpt, label_table): - def to_tree(i, j): - if j - i == 1: - return Tree(sent[i], None, sent[i]) - else: - k = backpt[i][j] - return Tree(label_table[i][j], [to_tree(i, k), to_tree(k, j)], None) - - return to_tree(0, len(sent)) - - def to_table(value_s, i_s, j_s): - table = [[None for _ in range(np.max(j_s) + 1)] for _ in range(np.max(i_s) + 1)] - for value, i, j in zip(value_s, i_s, j_s): - table[i][j] = value - return table - - # produce list of spans to pass to is_constituent, while keeping track of which sentence - sent_s, i_s, j_s = [], [], [] - idx_all = [] - for sent in sent_all: - start = len(sent_s) - for i in range(len(sent)): - for j in range(i + 1, len(sent) + 1): - sent_s.append(sent) - i_s.append(i) - j_s.append(j) - idx_all.append((start, len(sent_s))) - - # feed spans to is_constituent - # prob_s, label_s = self.is_constituent(sent_s, i_s, j_s, verbose = verbose) - - # given span probs, perform CKY to get best tree for each sentence. - tree_all, prob_all = [], [] - for sent, idx in zip(sent_all, idx_all): - # first, use tables to keep track of things - k, l = idx - prob, label = prob_s[k:l], label_s[k:l] - i, j = i_s[k:l], j_s[k:l] - - prob_table = to_table(prob, i, j) - label_table = to_table(label, i, j) - - # perform cky using scores and backpointers - score_table = [[None for _ in range(len(sent) + 1)] for _ in range(len(sent))] - backpt_table = [[None for _ in range(len(sent) + 1)] for _ in range(len(sent))] - for i in range(len(sent)): # base case: single words - score_table[i][i + 1] = 1 - for j in range(2, len(sent) + 1): - for i in range(j - 2, -1, -1): - best, argmax = -np.inf, None - for k in range(i + 1, j): # find splitpoint - score = score_table[i][k] + score_table[k][j] - if score > best: - best, argmax = score, k - score_table[i][j] = best + prob_table[i][j] - backpt_table[i][j] = argmax - - tree = backpt_to_tree(sent, backpt_table, label_table) - tree_all.append(tree) - prob_all.append(prob_table) - - return tree_all, prob_all - - -def get_best_parse(sentence, spans): - flattened_scores = [] - for i in range(spans.shape[0]): - for j in range(spans.shape[1]): - if i > j: - continue - else: - flattened_scores.append(spans[i, j]) - prob_s, label_s = flattened_scores, ["S"] * len(flattened_scores) - # print(prob_s, label_s) - trees, _ = CKY(sent_all=sentence, prob_s=prob_s, label_s=label_s) - s = str(trees[0]) - # Replace previous occurrence of string - out = re.sub(r"(? 
0.5) - gt_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32) - gt_instance1 = Instances(image_shape) - gt_instance1.gt_boxes = Boxes(gt_boxes1) - gt_instance1.gt_classes = torch.tensor([1, 2]) - gt_instance1.gt_masks = BitMasks(torch.rand((2,) + image_shape) > 0.5) - gt_instances = [gt_instance0, gt_instance1] - - proposal_generator = build_proposal_generator(cfg, feature_shape) - roi_heads = StandardROIHeads(cfg, feature_shape) - - with EventStorage(): # capture events in a new storage to discard them - proposals, proposal_losses = proposal_generator(images, features, gt_instances) - _, detector_losses = roi_heads(images, features, proposals, gt_instances) - - detector_losses.update(proposal_losses) - expected_losses = { - "loss_cls": 4.5253729820251465, - "loss_box_reg": 0.009785720147192478, - "loss_mask": 0.693184494972229, - "loss_rpn_cls": 0.08186662942171097, - "loss_rpn_loc": 0.1104838103055954, - } - succ = all( - torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0))) - for name in detector_losses.keys() - ) - self.assertTrue( - succ, - "Losses has changed! New losses: {}".format( - {k: v.item() for k, v in detector_losses.items()} - ), - ) - - def test_rroi_heads(self): - torch.manual_seed(121) - cfg = get_cfg() - cfg.MODEL.PROPOSAL_GENERATOR.NAME = "RRPN" - cfg.MODEL.ANCHOR_GENERATOR.NAME = "RotatedAnchorGenerator" - cfg.MODEL.ROI_HEADS.NAME = "RROIHeads" - cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead" - cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2 - cfg.MODEL.RPN.BBOX_REG_WEIGHTS = (1, 1, 1, 1, 1) - cfg.MODEL.RPN.HEAD_NAME = "StandardRPNHead" - cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignRotated" - cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5, 1) - num_images = 2 - images_tensor = torch.rand(num_images, 20, 30) - image_sizes = [(10, 10), (20, 30)] - images = ImageList(images_tensor, image_sizes) - num_channels = 1024 - features = {"res4": torch.rand(num_images, num_channels, 1, 2)} - feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)} - - image_shape = (15, 15) - gt_boxes0 = torch.tensor([[2, 2, 2, 2, 30], [4, 4, 4, 4, 0]], dtype=torch.float32) - gt_instance0 = Instances(image_shape) - gt_instance0.gt_boxes = RotatedBoxes(gt_boxes0) - gt_instance0.gt_classes = torch.tensor([2, 1]) - gt_boxes1 = torch.tensor([[1.5, 5.5, 1, 3, 0], [8.5, 4, 3, 2, -50]], dtype=torch.float32) - gt_instance1 = Instances(image_shape) - gt_instance1.gt_boxes = RotatedBoxes(gt_boxes1) - gt_instance1.gt_classes = torch.tensor([1, 2]) - gt_instances = [gt_instance0, gt_instance1] - - proposal_generator = build_proposal_generator(cfg, feature_shape) - roi_heads = build_roi_heads(cfg, feature_shape) - - with EventStorage(): # capture events in a new storage to discard them - proposals, proposal_losses = proposal_generator(images, features, gt_instances) - _, detector_losses = roi_heads(images, features, proposals, gt_instances) - - detector_losses.update(proposal_losses) - expected_losses = { - "loss_cls": 4.365657806396484, - "loss_box_reg": 0.0015851043863222003, - "loss_rpn_cls": 0.2427729219198227, - "loss_rpn_loc": 0.3646621108055115, - } - succ = all( - torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0))) - for name in detector_losses.keys() - ) - self.assertTrue( - succ, - "Losses has changed! 
New losses: {}".format( - {k: v.item() for k, v in detector_losses.items()} - ), - ) - - def test_box_head_scriptability(self): - input_shape = ShapeSpec(channels=1024, height=14, width=14) - box_features = torch.randn(4, 1024, 14, 14) - - box_head = FastRCNNConvFCHead( - input_shape, conv_dims=[512, 512], fc_dims=[1024, 1024] - ).eval() - script_box_head = torch.jit.script(box_head) - - origin_output = box_head(box_features) - script_output = script_box_head(box_features) - self.assertTrue(torch.equal(origin_output, script_output)) - - def test_mask_head_scriptability(self): - input_shape = ShapeSpec(channels=1024) - mask_features = torch.randn(4, 1024, 14, 14) - - image_shapes = [(10, 10), (15, 15)] - pred_instance0 = Instances(image_shapes[0]) - pred_classes0 = torch.tensor([1, 2, 3], dtype=torch.int64) - pred_instance0.pred_classes = pred_classes0 - pred_instance1 = Instances(image_shapes[1]) - pred_classes1 = torch.tensor([4], dtype=torch.int64) - pred_instance1.pred_classes = pred_classes1 - - mask_head = MaskRCNNConvUpsampleHead( - input_shape, num_classes=80, conv_dims=[256, 256] - ).eval() - # pred_instance will be in-place changed during the inference - # process of `MaskRCNNConvUpsampleHead` - origin_outputs = mask_head(mask_features, deepcopy([pred_instance0, pred_instance1])) - - fields = {"pred_masks": torch.Tensor, "pred_classes": torch.Tensor} - with freeze_training_mode(mask_head), patch_instances(fields) as NewInstances: - sciript_mask_head = torch.jit.script(mask_head) - pred_instance0 = NewInstances.from_instances(pred_instance0) - pred_instance1 = NewInstances.from_instances(pred_instance1) - script_outputs = sciript_mask_head(mask_features, [pred_instance0, pred_instance1]) - - for origin_ins, script_ins in zip(origin_outputs, script_outputs): - assert_instances_allclose(origin_ins, script_ins, rtol=0) - - def test_keypoint_head_scriptability(self): - input_shape = ShapeSpec(channels=1024, height=14, width=14) - keypoint_features = torch.randn(4, 1024, 14, 14) - - image_shapes = [(10, 10), (15, 15)] - pred_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6], [1, 5, 2, 8]], dtype=torch.float32) - pred_instance0 = Instances(image_shapes[0]) - pred_instance0.pred_boxes = Boxes(pred_boxes0) - pred_boxes1 = torch.tensor([[7, 3, 10, 5]], dtype=torch.float32) - pred_instance1 = Instances(image_shapes[1]) - pred_instance1.pred_boxes = Boxes(pred_boxes1) - - keypoint_head = KRCNNConvDeconvUpsampleHead( - input_shape, num_keypoints=17, conv_dims=[512, 512] - ).eval() - origin_outputs = keypoint_head( - keypoint_features, deepcopy([pred_instance0, pred_instance1]) - ) - - fields = { - "pred_boxes": Boxes, - "pred_keypoints": torch.Tensor, - "pred_keypoint_heatmaps": torch.Tensor, - } - with freeze_training_mode(keypoint_head), patch_instances(fields) as NewInstances: - script_keypoint_head = torch.jit.script(keypoint_head) - pred_instance0 = NewInstances.from_instances(pred_instance0) - pred_instance1 = NewInstances.from_instances(pred_instance1) - script_outputs = script_keypoint_head( - keypoint_features, [pred_instance0, pred_instance1] - ) - - for origin_ins, script_ins in zip(origin_outputs, script_outputs): - assert_instances_allclose(origin_ins, script_ins, rtol=0) - - def test_StandardROIHeads_scriptability(self): - cfg = get_cfg() - cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead" - cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2 - cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2" - cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5) - cfg.MODEL.MASK_ON = True - 
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.01 - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.01 - num_images = 2 - images_tensor = torch.rand(num_images, 20, 30) - image_sizes = [(10, 10), (20, 30)] - images = ImageList(images_tensor, image_sizes) - num_channels = 1024 - features = {"res4": torch.rand(num_images, num_channels, 1, 2)} - feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)} - - roi_heads = StandardROIHeads(cfg, feature_shape).eval() - - proposal0 = Instances(image_sizes[0]) - proposal_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32) - proposal0.proposal_boxes = Boxes(proposal_boxes0) - proposal0.objectness_logits = torch.tensor([0.5, 0.7], dtype=torch.float32) - - proposal1 = Instances(image_sizes[1]) - proposal_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32) - proposal1.proposal_boxes = Boxes(proposal_boxes1) - proposal1.objectness_logits = torch.tensor([0.1, 0.9], dtype=torch.float32) - proposals = [proposal0, proposal1] - - pred_instances, _ = roi_heads(images, features, proposals) - fields = { - "objectness_logits": torch.Tensor, - "proposal_boxes": Boxes, - "pred_classes": torch.Tensor, - "scores": torch.Tensor, - "pred_masks": torch.Tensor, - "pred_boxes": Boxes, - "pred_keypoints": torch.Tensor, - "pred_keypoint_heatmaps": torch.Tensor, - } - with freeze_training_mode(roi_heads), patch_instances(fields) as new_instances: - proposal0 = new_instances.from_instances(proposal0) - proposal1 = new_instances.from_instances(proposal1) - proposals = [proposal0, proposal1] - scripted_rot_heads = torch.jit.script(roi_heads) - scripted_pred_instances, _ = scripted_rot_heads(images, features, proposals) - - for instance, scripted_instance in zip(pred_instances, scripted_pred_instances): - assert_instances_allclose(instance, scripted_instance, rtol=0) - - def test_PointRend_mask_head_tracing(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - point_rend.add_pointrend_config(cfg) - cfg.MODEL.ROI_HEADS.IN_FEATURES = ["p2", "p3"] - cfg.MODEL.ROI_MASK_HEAD.NAME = "PointRendMaskHead" - cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "" - cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON = True - chan = 256 - head = point_rend.PointRendMaskHead( - cfg, - { - "p2": ShapeSpec(channels=chan, stride=4), - "p3": ShapeSpec(channels=chan, stride=8), - }, - ) - - def gen_inputs(h, w, N): - p2 = torch.rand(1, chan, h, w) - p3 = torch.rand(1, chan, h // 2, w // 2) - boxes = random_boxes(N, max_coord=h) - return p2, p3, boxes - - class Wrap(nn.ModuleDict): - def forward(self, p2, p3, boxes): - features = { - "p2": p2, - "p3": p3, - } - inst = Instances((p2.shape[2] * 4, p2.shape[3] * 4)) - inst.pred_boxes = Boxes(boxes) - inst.pred_classes = torch.zeros(inst.__len__(), dtype=torch.long) - out = self.head(features, [inst])[0] - return out.pred_masks - - model = Wrap({"head": head}) - model.eval() - with torch.no_grad(), patch_builtin_len(): - traced = torch.jit.trace(model, gen_inputs(302, 208, 20)) - inputs = gen_inputs(100, 120, 30) - out_eager = model(*inputs) - out_trace = traced(*inputs) - self.assertTrue(torch.allclose(out_eager, out_trace)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ntt123/mnist-rnn/index.html b/spaces/ntt123/mnist-rnn/index.html deleted file mode 100644 index 048031260e442abc269ad1148e884a57e81b3b84..0000000000000000000000000000000000000000 --- a/spaces/ntt123/mnist-rnn/index.html +++ /dev/null @@ -1,14 +0,0 @@ - - - - - - mnist-rnn - - - - - - - - \ No newline at end 
of file diff --git a/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py b/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py deleted file mode 100644 index 95a019c6aeec8c3f1d582907d5fe7ff3ed6b9369..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Custom_Yolov7/models/yolo.py +++ /dev/null @@ -1,843 +0,0 @@ -import argparse -import logging -import sys -from copy import deepcopy - -sys.path.append('./') # to run '$ python *.py' files in subdirectories -logger = logging.getLogger(__name__) -import torch -from models.common import * -from models.experimental import * -from utils.autoanchor import check_anchor_order -from utils.general import make_divisible, check_file, set_logging -from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \ - select_device, copy_attr -from utils.loss import SigmoidBin - -try: - import thop # for FLOPS computation -except ImportError: - thop = None - - -class Detect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(Detect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. 
* self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IDetect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(IDetect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - def fuseforward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. 
* self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - def fuse(self): - print("IDetect.fuse") - # fuse ImplicitA and Convolution - for i in range(len(self.m)): - c1,c2,_,_ = self.m[i].weight.shape - c1_,c2_, _,_ = self.ia[i].implicit.shape - self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) - - # fuse ImplicitM and Convolution - for i in range(len(self.m)): - c1,c2, _,_ = self.im[i].implicit.shape - self.m[i].bias *= self.im[i].implicit.reshape(c2) - self.m[i].weight *= self.im[i].implicit.transpose(0,1) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IKeypoint(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, nc=80, anchors=(), nkpt=17, ch=(), inplace=True, dw_conv_kpt=False): # detection layer - super(IKeypoint, self).__init__() - self.nc = nc # number of classes - self.nkpt = nkpt - self.dw_conv_kpt = dw_conv_kpt - self.no_det=(nc + 5) # number of outputs per anchor for box and class - self.no_kpt = 3*self.nkpt ## number of outputs per anchor for keypoints - self.no = self.no_det+self.no_kpt - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - self.flip_test = False - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no_det * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = nn.ModuleList(ImplicitM(self.no_det * self.na) for _ in ch) - - if self.nkpt is not None: - if self.dw_conv_kpt: #keypoint head is slightly more complex - self.m_kpt = nn.ModuleList( - nn.Sequential(DWConv(x, x, k=3), Conv(x,x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), Conv(x,x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), nn.Conv2d(x, self.no_kpt * self.na, 1)) for x in ch) - else: #keypoint head is a single convolution - self.m_kpt = nn.ModuleList(nn.Conv2d(x, self.no_kpt * self.na, 1) for x in ch) - - self.inplace = inplace # use in-place ops (e.g. 
slice assignment) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - if self.nkpt is None or self.nkpt==0: - x[i] = self.im[i](self.m[i](self.ia[i](x[i]))) # conv - else : - x[i] = torch.cat((self.im[i](self.m[i](self.ia[i](x[i]))), self.m_kpt[i](x[i])), axis=1) - - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - x_det = x[i][..., :6] - x_kpt = x[i][..., 6:] - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - kpt_grid_x = self.grid[i][..., 0:1] - kpt_grid_y = self.grid[i][..., 1:2] - - if self.nkpt == 0: - y = x[i].sigmoid() - else: - y = x_det.sigmoid() - - if self.inplace: - xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh - if self.nkpt != 0: - x_kpt[..., 0::3] = (x_kpt[..., ::3] * 2. - 0.5 + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy - x_kpt[..., 1::3] = (x_kpt[..., 1::3] * 2. - 0.5 + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy - #x_kpt[..., 0::3] = (x_kpt[..., ::3] + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy - #x_kpt[..., 1::3] = (x_kpt[..., 1::3] + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy - #print('=============') - #print(self.anchor_grid[i].shape) - #print(self.anchor_grid[i][...,0].unsqueeze(4).shape) - #print(x_kpt[..., 0::3].shape) - #x_kpt[..., 0::3] = ((x_kpt[..., 0::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 1::3] = ((x_kpt[..., 1::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 0::3] = (((x_kpt[..., 0::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 1::3] = (((x_kpt[..., 1::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy - x_kpt[..., 2::3] = x_kpt[..., 2::3].sigmoid() - - y = torch.cat((xy, wh, y[..., 4:], x_kpt), dim = -1) - - else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - if self.nkpt != 0: - y[..., 6:] = (y[..., 6:] * 2. 
- 0.5 + self.grid[i].repeat((1,1,1,1,self.nkpt))) * self.stride[i] # xy - y = torch.cat((xy, wh, y[..., 4:]), -1) - - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class IAuxDetect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(IAuxDetect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[:self.nl]) # output conv - self.m2 = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[self.nl:]) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch[:self.nl]) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch[:self.nl]) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - x[i+self.nl] = self.m2[i](x[i+self.nl]) - x[i+self.nl] = x[i+self.nl].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x[:self.nl]) - - def fuseforward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].data # wh - y = torch.cat((xy, wh, y[..., 4:]), -1) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - def fuse(self): - print("IAuxDetect.fuse") - # fuse ImplicitA and Convolution - for i in range(len(self.m)): - c1,c2,_,_ = self.m[i].weight.shape - c1_,c2_, _,_ = self.ia[i].implicit.shape - self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) - - # fuse ImplicitM and Convolution - for i in range(len(self.m)): - c1,c2, _,_ = self.im[i].implicit.shape - self.m[i].bias *= self.im[i].implicit.reshape(c2) - self.m[i].weight *= self.im[i].implicit.transpose(0,1) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IBin(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, nc=80, anchors=(), ch=(), bin_count=21): # detection layer - super(IBin, self).__init__() - self.nc = nc # number of classes - self.bin_count = bin_count - - self.w_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) - self.h_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) - # classes, x,y,obj - self.no = nc + 3 + \ - self.w_bin_sigmoid.get_length() + self.h_bin_sigmoid.get_length() # w-bce, h-bce - # + self.x_bin_sigmoid.get_length() + self.y_bin_sigmoid.get_length() - - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) - - def forward(self, x): - - #self.x_bin_sigmoid.use_fw_regression = True - #self.y_bin_sigmoid.use_fw_regression = True - self.w_bin_sigmoid.use_fw_regression = True - self.h_bin_sigmoid.use_fw_regression = True - - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy - #y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - - - #px = (self.x_bin_sigmoid.forward(y[..., 0:12]) + self.grid[i][..., 0]) * self.stride[i] - #py = (self.y_bin_sigmoid.forward(y[..., 12:24]) + self.grid[i][..., 1]) * self.stride[i] - - pw = self.w_bin_sigmoid.forward(y[..., 2:24]) * self.anchor_grid[i][..., 0] - ph = self.h_bin_sigmoid.forward(y[..., 24:46]) * self.anchor_grid[i][..., 1] - - #y[..., 0] = px - #y[..., 1] = py - y[..., 2] = pw - y[..., 3] = ph - - y = torch.cat((y[..., 0:4], y[..., 46:]), dim=-1) - - z.append(y.view(bs, -1, y.shape[-1])) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class Model(nn.Module): - def __init__(self, cfg='yolor-csp-c.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - super(Model, self).__init__() - self.traced = False - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg) as f: - self.yaml = yaml.load(f, Loader=yaml.SafeLoader) # model dict - - # Define model - ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - if nc and nc != self.yaml['nc']: - logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - self.yaml['nc'] = nc # override yaml value - if anchors: - logger.info(f'Overriding model.yaml anchors with anchors={anchors}') - self.yaml['anchors'] = round(anchors) # override yaml value - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml['nc'])] # default names - # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))]) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IDetect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IAuxDetect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))[:4]]) # forward - #print(m.stride) - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_aux_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IBin): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases_bin() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IKeypoint): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 
1, 1) - self.stride = m.stride - self._initialize_biases_kpt() # only run once - # print('Strides: %s' % m.stride.tolist()) - - # Init weights, biases - initialize_weights(self) - self.info() - logger.info('') - - def forward(self, x, augment=False, profile=False): - if augment: - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - yi = self.forward_once(xi)[0] # forward - # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi[..., :4] /= si # de-scale - if fi == 2: - yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud - elif fi == 3: - yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr - y.append(yi) - return torch.cat(y, 1), None # augmented inference, train - else: - return self.forward_once(x, profile) # single-scale inference, train - - def forward_once(self, x, profile=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - - if not hasattr(self, 'traced'): - self.traced=False - - if self.traced: - if isinstance(m, Detect) or isinstance(m, IDetect) or isinstance(m, IAuxDetect) or isinstance(m, IKeypoint): - break - - if profile: - c = isinstance(m, (Detect, IDetect, IAuxDetect, IBin)) - o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS - for _ in range(10): - m(x.copy() if c else x) - t = time_synchronized() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_synchronized() - t) * 100) - print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type)) - - x = m(x) # run - - y.append(x if m.i in self.save else None) # save output - - if profile: - print('%.1fms total' % sum(dt)) - return x - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _initialize_aux_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 
- m = self.model[-1] # Detect() module - for mi, mi2, s in zip(m.m, m.m2, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - b2 = mi2.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b2.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b2.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi2.bias = torch.nn.Parameter(b2.view(-1), requires_grad=True) - - def _initialize_biases_bin(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Bin() module - bc = m.bin_count - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - old = b[:, (0,1,2,bc+3)].data - obj_idx = 2*bc+4 - b[:, :obj_idx].data += math.log(0.6 / (bc + 1 - 0.99)) - b[:, obj_idx].data += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b[:, (obj_idx+1):].data += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - b[:, (0,1,2,bc+3)].data = old - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _initialize_biases_kpt(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _print_biases(self): - m = self.model[-1] # Detect() module - for mi in m.m: # from - b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - - # def _print_weights(self): - # for m in self.model.modules(): - # if type(m) is Bottleneck: - # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - print('Fusing layers... ') - for m in self.model.modules(): - if isinstance(m, RepConv): - #print(f" fuse_repvgg_block") - m.fuse_repvgg_block() - elif isinstance(m, RepConv_OREPA): - #print(f" switch_to_deploy") - m.switch_to_deploy() - elif type(m) is Conv and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.fuseforward # update forward - elif isinstance(m, (IDetect, IAuxDetect)): - m.fuse() - m.forward = m.fuseforward - self.info() - return self - - def nms(self, mode=True): # add or remove NMS module - present = type(self.model[-1]) is NMS # last layer is NMS - if mode and not present: - print('Adding NMS... ') - m = NMS() # module - m.f = -1 # from - m.i = self.model[-1].i + 1 # index - self.model.add_module(name='%s' % m.i, module=m) # add - self.eval() - elif not mode and present: - print('Removing NMS... 
') - self.model = self.model[:-1] # remove - return self - - def autoshape(self): # add autoShape module - print('Adding autoShape... ') - m = autoShape(self) # wrap model - copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes - return m - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - -def parse_model(d, ch): # model_dict, input_channels(3) - logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments')) - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except: - pass - - n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [nn.Conv2d, Conv, RobustConv, RobustConv2, DWConv, GhostConv, RepConv, RepConv_OREPA, DownC, - SPP, SPPF, SPPCSPC, GhostSPPCSPC, MixConv2d, Focus, Stem, GhostStem, CrossConv, - Bottleneck, BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, - RepBottleneck, RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, - Res, ResCSPA, ResCSPB, ResCSPC, - RepRes, RepResCSPA, RepResCSPB, RepResCSPC, - ResX, ResXCSPA, ResXCSPB, ResXCSPC, - RepResX, RepResXCSPA, RepResXCSPB, RepResXCSPC, - Ghost, GhostCSPA, GhostCSPB, GhostCSPC, - SwinTransformerBlock, STCSPA, STCSPB, STCSPC, - SwinTransformer2Block, ST2CSPA, ST2CSPB, ST2CSPC]: - c1, c2 = ch[f], args[0] - if c2 != no: # if not output - c2 = make_divisible(c2 * gw, 8) - - args = [c1, c2, *args[1:]] - if m in [DownC, SPPCSPC, GhostSPPCSPC, - BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, - RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, - ResCSPA, ResCSPB, ResCSPC, - RepResCSPA, RepResCSPB, RepResCSPC, - ResXCSPA, ResXCSPB, ResXCSPC, - RepResXCSPA, RepResXCSPB, RepResXCSPC, - GhostCSPA, GhostCSPB, GhostCSPC, - STCSPA, STCSPB, STCSPC, - ST2CSPA, ST2CSPB, ST2CSPC]: - args.insert(2, n) # number of repeats - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum([ch[x] for x in f]) - elif m is Chuncat: - c2 = sum([ch[x] for x in f]) - elif m is Shortcut: - c2 = ch[f[0]] - elif m is Foldcut: - c2 = ch[f] // 2 - elif m in [Detect, IDetect, IAuxDetect, IBin, IKeypoint]: - args.append([ch[x] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - elif m is ReOrg: - c2 = ch[f] * 4 - elif m is Contract: - c2 = ch[f] * args[0] ** 2 - elif m is Expand: - c2 = ch[f] // args[0] ** 2 - else: - c2 = ch[f] - - m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum([x.numel() for x in m_.parameters()]) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - if i == 0: - ch = [] - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -if __name__ 
== '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolor-csp-c.yaml', help='model.yaml') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--profile', action='store_true', help='profile model speed') - opt = parser.parse_args() - opt.cfg = check_file(opt.cfg) # check file - set_logging() - device = select_device(opt.device) - - # Create model - model = Model(opt.cfg).to(device) - model.train() - - if opt.profile: - img = torch.rand(1, 3, 640, 640).to(device) - y = model(img, profile=True) - - # Profile - # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - # y = model(img, profile=True) - - # Tensorboard - # from torch.utils.tensorboard import SummaryWriter - # tb_writer = SummaryWriter() - # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/") - # tb_writer.add_graph(model.model, img) # add model to tensorboard - # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py deleted file mode 100644 index c0ad3b4a3486d4d54aa68b1bf6b74f8c387f7f6a..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/alt_diffusion/__init__.py +++ /dev/null @@ -1,52 +0,0 @@ -from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - get_objects_from_module, - is_torch_available, - is_transformers_available, -) - - -_dummy_objects = {} -_import_structure = {} - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils import dummy_torch_and_transformers_objects - - _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) -else: - _import_structure["modeling_roberta_series"] = ["RobertaSeriesModelWithTransformation"] - _import_structure["pipeline_alt_diffusion"] = ["AltDiffusionPipeline"] - _import_structure["pipeline_alt_diffusion_img2img"] = ["AltDiffusionImg2ImgPipeline"] - - _import_structure["pipeline_output"] = ["AltDiffusionPipelineOutput"] - -if TYPE_CHECKING: - try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * - - else: - from .modeling_roberta_series import RobertaSeriesModelWithTransformation - from .pipeline_alt_diffusion import AltDiffusionPipeline - from .pipeline_alt_diffusion_img2img import AltDiffusionImg2ImgPipeline - from .pipeline_output import AltDiffusionPipelineOutput - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - ) - for name, value in _dummy_objects.items(): - setattr(sys.modules[__name__], name, value) diff --git a/spaces/parkyzh/bingo/src/components/chat-attachments.tsx b/spaces/parkyzh/bingo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 
'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
                - {attachmentList.map(file => ( -
                - {file.status === 'loading' && ( -
                -
                -
                ) - } - {file.status !== 'error' && ( -
                - -
                ) - } - {file.status === 'error' && ( -
                - refresh uploadImage(file.url)} /> -
                - )} - -
                - ))} -
                - ) : null -} diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py deleted file mode 100644 index c97a726db1b1f102fa6fe222bde48eef07fa7043..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/transforms/__init__.py +++ /dev/null @@ -1,76 +0,0 @@ -from pathlib import Path -from typing import Callable, List, Optional, Tuple - -from monai.transforms import Compose - -from transforms.base import get_image_loading_transform, get_apply_crop_transform, get_stacking_transform -from transforms.mask import get_mask_transform -from transforms.coordinates import get_normalized_coordinates_transform -from transforms.augmentation import * -from transforms.backbone import * - - -def _build_transforms_composition(hparams, transform_getters: List[Callable], *initial_args) -> Tuple[Compose, List[str]]: - """ - Builds a transforms composition from the given functions, which take the hparams and loaded keys as arguments, and - produce a Compose containing the desired transforms. The initialization function receives the provided initial arguments. - """ - transforms = [] - keys = [] - - for i in range(0, len(transform_getters)): - if len(keys) == 0: - assert i == 0, f"Function {transform_getters[i]} did not yield any loaded keys." - # initialize - transform, keys = transform_getters[0](hparams, *initial_args) - else: - transform, keys = transform_getters[i](hparams, keys) - transforms.append(transform) - - return Compose(transforms), keys - -def _get_config_transform_by_name(transform_name: str) -> Callable: - if transform_name == "intensity": - return intensity_transform - elif transform_name.startswith("spatial3d"): - if "simple" in transform_name: - return lambda hparams, loaded_keys: spatial_transform(hparams, loaded_keys, mode='simple') - else: - return lambda hparams, loaded_keys: spatial_transform(hparams, loaded_keys, mode='default') - elif transform_name == "modelsgenesis": - return models_genesis_transform - elif transform_name == "pretrained_resnet": - return pretrained_resnet_transform - elif transform_name == "robustness": - return robustness_transform - else: - raise ValueError(f"Unknown transform: {transform_name}") - -def get_training_transforms(hparams, image_dir: Path, mask_dir: Optional[Path] = None) -> Compose: - transforms_base = [get_image_loading_transform, get_mask_transform] - - # robustness has to run early as we may need to operate on the whole volume for affine transformation and padding, - # which must occur prior to any cropping or normalization - if "robustness" in hparams.transforms: transforms_base.append(_get_config_transform_by_name("robustness")) - - transforms_base.extend([get_apply_crop_transform, get_normalized_coordinates_transform]) - - # preprocessing transforms must be run first - preprocessing_transforms = ["modelsgenesis", "pretrained_resnet"] - config_transforms = [_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if transform_name in preprocessing_transforms] - - # then append the rest minus the robustness transform that is run earlier - exclusion_criterion = lambda transform_name: transform_name in preprocessing_transforms or transform_name == "robustness" - config_transforms.extend([_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if not exclusion_criterion]) - - # the stacking 
transform must not occur before config transforms are run to avoid any interference - return _build_transforms_composition(hparams, transforms_base + config_transforms + [get_stacking_transform], image_dir, mask_dir)[0] - -def get_base_transforms(hparams, image_dir: Path, mask_dir: Optional[Path] = None) -> Compose: - transforms_base = [get_image_loading_transform, get_mask_transform, get_apply_crop_transform, get_normalized_coordinates_transform] - - # apply preprocessing transforms - preprocessing_transforms = ["modelsgenesis", "pretrained_resnet"] - config_transforms = [_get_config_transform_by_name(transform_name) for transform_name in hparams.transforms if transform_name in preprocessing_transforms] - - return _build_transforms_composition(hparams, transforms_base + config_transforms + [get_stacking_transform], image_dir, mask_dir)[0] \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py deleted file mode 100644 index 75ce2dc9057a20a957abe2fbd4ef094dc4196684..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/base.py +++ /dev/null @@ -1,39 +0,0 @@ -import abc - -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata.base import BaseDistribution -from pip._internal.req import InstallRequirement - - -class AbstractDistribution(metaclass=abc.ABCMeta): - """A base class for handling installable artifacts. - - The requirements for anything installable are as follows: - - - we must be able to determine the requirement name - (or we can't correctly handle the non-upgrade case). - - - for packages with setup requirements, we must also be able - to determine their requirements without installing additional - packages (for the same reason as run-time dependencies) - - - we must be able to create a Distribution object exposing the - above metadata. - """ - - def __init__(self, req: InstallRequirement) -> None: - super().__init__() - self.req = req - - @abc.abstractmethod - def get_metadata_distribution(self) -> BaseDistribution: - raise NotImplementedError() - - @abc.abstractmethod - def prepare_distribution_metadata( - self, - finder: PackageFinder, - build_isolation: bool, - check_build_deps: bool, - ) -> None: - raise NotImplementedError() diff --git a/spaces/posak/Tune-A-Video-Training-UI/app.py b/spaces/posak/Tune-A-Video-Training-UI/app.py deleted file mode 100644 index 3e0b9a282fc42c71e6c0f8d7f238a79a9c53c697..0000000000000000000000000000000000000000 --- a/spaces/posak/Tune-A-Video-Training-UI/app.py +++ /dev/null @@ -1,84 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -from subprocess import getoutput - -import gradio as gr -import torch - -from app_inference import create_inference_demo -from app_training import create_training_demo -from app_upload import create_upload_demo -from inference import InferencePipeline -from trainer import Trainer - -TITLE = '# [Tune-A-Video](https://tuneavideo.github.io/) UI' - -ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI' -SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID) -GPU_DATA = getoutput('nvidia-smi') -SHARED_UI_WARNING = f'''## Attention - Training doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU. - -
                Duplicate Space
                -''' - -if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID: - SETTINGS = f'Settings' -else: - SETTINGS = 'Settings' - -INVALID_GPU_WARNING = f'''## Attention - the specified GPU is invalid. Training may not work. Make sure you have selected a `T4 GPU` for this task.''' - -CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU. -
                -You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces. -You can use "T4 small/medium" to run this demo. -
                -''' - -HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run. -
                -You can check and create your Hugging Face tokens here. -You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab. -
                -''' - -HF_TOKEN = os.getenv('HF_TOKEN') - - -def show_warning(warning_text: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown(warning_text) - return demo - - -pipe = InferencePipeline(HF_TOKEN) -trainer = Trainer(HF_TOKEN) - -with gr.Blocks(css='style.css') as demo: - if SPACE_ID == ORIGINAL_SPACE_ID: - show_warning(SHARED_UI_WARNING) - elif not torch.cuda.is_available(): - show_warning(CUDA_NOT_AVAILABLE_WARNING) - elif (not 'T4' in GPU_DATA): - show_warning(INVALID_GPU_WARNING) - - gr.Markdown(TITLE) - with gr.Tabs(): - with gr.TabItem('Train'): - create_training_demo(trainer, pipe) - with gr.TabItem('Run'): - create_inference_demo(pipe, HF_TOKEN) - with gr.TabItem('Upload'): - gr.Markdown(''' - - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed. - ''') - create_upload_demo(HF_TOKEN) - - if not HF_TOKEN: - show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING) - -demo.queue(max_size=1).launch(share=False) diff --git a/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h b/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h deleted file mode 100644 index beb539619a19e5025b6874fc5cbdc0b0b704557b..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/include/pa_mac_core.h +++ /dev/null @@ -1,191 +0,0 @@ -#ifndef PA_MAC_CORE_H -#define PA_MAC_CORE_H -/* - * PortAudio Portable Real-Time Audio Library - * Macintosh Core Audio specific extensions - * portaudio.h should be included before this file. - * - * Copyright (c) 2005-2006 Bjorn Roche - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - * @ingroup public_header - * @brief CoreAudio-specific PortAudio API extension header file. 
- */ - -#include "portaudio.h" - -#include -#include - -#ifdef __cplusplus -extern "C" { -#endif - - -/** - * A pointer to a paMacCoreStreamInfo may be passed as - * the hostApiSpecificStreamInfo in the PaStreamParameters struct - * when opening a stream or querying the format. Use NULL, for the - * defaults. Note that for duplex streams, flags for input and output - * should be the same or behaviour is undefined. - */ -typedef struct -{ - unsigned long size; /**size of whole structure including this header */ - PaHostApiTypeId hostApiType; /**host API for which this data is intended */ - unsigned long version; /**structure version */ - unsigned long flags; /** flags to modify behaviour */ - SInt32 const * channelMap; /** Channel map for HAL channel mapping , if not needed, use NULL;*/ - unsigned long channelMapSize; /** Channel map size for HAL channel mapping , if not needed, use 0;*/ -} PaMacCoreStreamInfo; - -/** - * Functions - */ - - -/** Use this function to initialize a paMacCoreStreamInfo struct - * using the requested flags. Note that channel mapping is turned - * off after a call to this function. - * @param data The datastructure to initialize - * @param flags The flags to initialize the datastructure with. -*/ -void PaMacCore_SetupStreamInfo( PaMacCoreStreamInfo *data, unsigned long flags ); - -/** call this after pa_SetupMacCoreStreamInfo to use channel mapping as described in notes.txt. - * @param data The stream info structure to assign a channel mapping to - * @param channelMap The channel map array, as described in notes.txt. This array pointer will be used directly (ie the underlying data will not be copied), so the caller should not free the array until after the stream has been opened. - * @param channelMapSize The size of the channel map array. - */ -void PaMacCore_SetupChannelMap( PaMacCoreStreamInfo *data, const SInt32 * const channelMap, unsigned long channelMapSize ); - -/** - * Retrieve the AudioDeviceID of the input device assigned to an open stream - * - * @param s The stream to query. - * - * @return A valid AudioDeviceID, or NULL if an error occurred. - */ -AudioDeviceID PaMacCore_GetStreamInputDevice( PaStream* s ); - -/** - * Retrieve the AudioDeviceID of the output device assigned to an open stream - * - * @param s The stream to query. - * - * @return A valid AudioDeviceID, or NULL if an error occurred. - */ -AudioDeviceID PaMacCore_GetStreamOutputDevice( PaStream* s ); - -/** - * Returns a statically allocated string with the device's name - * for the given channel. NULL will be returned on failure. - * - * This function's implementation is not complete! - * - * @param device The PortAudio device index. - * @param channel The channel number who's name is requested. - * @return a statically allocated string with the name of the device. - * Because this string is statically allocated, it must be - * copied if it is to be saved and used by the user after - * another call to this function. - * - */ -const char *PaMacCore_GetChannelName( int device, int channelIndex, bool input ); - - -/** Retrieve the range of legal native buffer sizes for the specified device, in sample frames. - - @param device The global index of the PortAudio device about which the query is being made. - @param minBufferSizeFrames A pointer to the location which will receive the minimum buffer size value. - @param maxBufferSizeFrames A pointer to the location which will receive the maximum buffer size value. - - @see kAudioDevicePropertyBufferFrameSizeRange in the CoreAudio SDK. 
- */ -PaError PaMacCore_GetBufferSizeRange( PaDeviceIndex device, - long *minBufferSizeFrames, long *maxBufferSizeFrames ); - - -/** - * Flags - */ - -/** - * The following flags alter the behaviour of PA on the mac platform. - * they can be ORed together. These should work both for opening and - * checking a device. - */ - -/** Allows PortAudio to change things like the device's frame size, - * which allows for much lower latency, but might disrupt the device - * if other programs are using it, even when you are just Querying - * the device. */ -#define paMacCoreChangeDeviceParameters (0x01) - -/** In combination with the above flag, - * causes the stream opening to fail, unless the exact sample rates - * are supported by the device. */ -#define paMacCoreFailIfConversionRequired (0x02) - -/** These flags set the SR conversion quality, if required. The weird ordering - * allows Maximum Quality to be the default.*/ -#define paMacCoreConversionQualityMin (0x0100) -#define paMacCoreConversionQualityMedium (0x0200) -#define paMacCoreConversionQualityLow (0x0300) -#define paMacCoreConversionQualityHigh (0x0400) -#define paMacCoreConversionQualityMax (0x0000) - -/** - * Here are some "preset" combinations of flags (above) to get to some - * common configurations. THIS IS OVERKILL, but if more flags are added - * it won't be. - */ - -/**This is the default setting: do as much sample rate conversion as possible - * and as little mucking with the device as possible. */ -#define paMacCorePlayNice (0x00) -/**This setting is tuned for pro audio apps. It allows SR conversion on input - and output, but it tries to set the appropriate SR on the device.*/ -#define paMacCorePro (0x01) -/**This is a setting to minimize CPU usage and still play nice.*/ -#define paMacCoreMinimizeCPUButPlayNice (0x0100) -/**This is a setting to minimize CPU usage, even if that means interrupting the device. 
*/ -#define paMacCoreMinimizeCPU (0x0101) - - -#ifdef __cplusplus -} -#endif /** __cplusplus */ - -#endif /** PA_MAC_CORE_H */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css deleted file mode 100644 index 7405bef579f275474ef94178dfeb94598d6cfe96..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3a71f692.css +++ /dev/null @@ -1 +0,0 @@ -.settings-wrapper.svelte-k0z87h.svelte-k0z87h{display:flex;justify-self:self-end}.text-button.svelte-k0z87h.svelte-k0z87h{border:1px solid var(--neutral-400);border-radius:var(--radius-sm);font-weight:300;font-size:var(--size-3);text-align:center;color:var(--neutral-400);height:var(--size-5);font-weight:700;padding:0 5px;margin-left:5px}.text-button.svelte-k0z87h.svelte-k0z87h:hover,.text-button.svelte-k0z87h.svelte-k0z87h:focus{color:var(--color-accent);border-color:var(--color-accent)}.controls.svelte-k0z87h.svelte-k0z87h{display:grid;grid-template-columns:1fr 1fr 1fr;margin-top:5px;overflow:hidden;align-items:center}@media (max-width: 320px){.controls.svelte-k0z87h.svelte-k0z87h{display:flex;flex-wrap:wrap}.controls.svelte-k0z87h .svelte-k0z87h{margin:var(--spacing-sm)}.controls.svelte-k0z87h .text-button.svelte-k0z87h{margin-left:0}}.action.svelte-k0z87h.svelte-k0z87h{width:var(--size-5);color:var(--neutral-400);margin-left:var(--spacing-md)}.icon.svelte-k0z87h.svelte-k0z87h:hover,.icon.svelte-k0z87h.svelte-k0z87h:focus{color:var(--color-accent)}.play-pause-wrapper.svelte-k0z87h.svelte-k0z87h{display:flex;justify-self:center}.playback.svelte-k0z87h.svelte-k0z87h{border:1px solid var(--neutral-400);border-radius:var(--radius-sm);width:5.5ch;font-weight:300;font-size:var(--size-3);text-align:center;color:var(--neutral-400);height:var(--size-5);font-weight:700}.playback.svelte-k0z87h.svelte-k0z87h:hover{color:var(--color-accent);border-color:var(--color-accent)}.rewind.svelte-k0z87h.svelte-k0z87h,.skip.svelte-k0z87h.svelte-k0z87h{margin:0 10px;color:var(--neutral-400)}.play-pause-button.svelte-k0z87h.svelte-k0z87h{width:var(--size-8);display:flex;align-items:center;justify-content:center;color:var(--neutral-400);fill:var(--neutral-400)}.component-wrapper.svelte-15pl8d9{padding:var(--size-3)}.timestamps.svelte-15pl8d9{display:flex;justify-content:space-between;align-items:center;width:100%;padding:var(--size-1) 0}#time.svelte-15pl8d9,#duration.svelte-15pl8d9{color:var(--neutral-400)}#trim-duration.svelte-15pl8d9{color:var(--color-accent);margin-right:var(--spacing-sm)}.waveform-container.svelte-15pl8d9{display:flex;align-items:center;justify-content:center;width:var(--size-full)}#waveform.svelte-15pl8d9{width:100%;height:100%;position:relative}.icon-buttons.svelte-rvdo70{display:flex;position:absolute;top:6px;right:6px;gap:var(--size-1)}#mic-select.svelte-pjb0ac.svelte-pjb0ac{height:var(--size-8);background:var(--block-background-fill);padding:0px var(--spacing-xxl);border-radius:var(--radius-full);font-size:var(--text-md);border:1px solid var(--neutral-400)}.controls.svelte-pjb0ac.svelte-pjb0ac{display:flex;align-items:center;justify-content:space-between;flex-wrap:wrap;overflow:hidden}.controls.svelte-pjb0ac select.svelte-pjb0ac{text-overflow:ellipsis;margin:var(--size-2) 0}@media (max-width: 375px){.controls.svelte-pjb0ac 
select.svelte-pjb0ac{width:100%}}.wrapper.svelte-pjb0ac.svelte-pjb0ac{display:flex;align-items:center;justify-content:center}#record.svelte-pjb0ac.svelte-pjb0ac{margin-right:var(--spacing-md)}.stop-button-paused.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--neutral-400);margin-right:5px}.stop-button-paused.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.stop-button.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl);animation:svelte-pjb0ac-scaling 1.8s infinite}.stop-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--primary-600);margin-right:5px}.record-button.svelte-pjb0ac.svelte-pjb0ac:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.record-button.svelte-pjb0ac.svelte-pjb0ac{height:var(--size-8);width:var(--size-24);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);display:flex;align-items:center;border:1px solid var(--neutral-400)}.stop-button.svelte-pjb0ac.svelte-pjb0ac:disabled{cursor:not-allowed}.record-button.svelte-pjb0ac.svelte-pjb0ac:disabled{cursor:not-allowed;opacity:.5}@keyframes svelte-pjb0ac-scaling{0%{background-color:var(--primary-600);scale:1}50%{background-color:var(--primary-600);scale:1.2}to{background-color:var(--primary-600);scale:1}}.pause-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);border:1px solid var(--neutral-400);border-radius:var(--radius-3xl);padding:var(--spacing-md)}.resume-button.svelte-pjb0ac.svelte-pjb0ac{display:none;height:var(--size-8);width:var(--size-20);border:1px solid var(--neutral-400);border-radius:var(--radius-3xl);padding:var(--spacing-xl);line-height:1px;font-size:var(--text-md)}::part(region){border-radius:var(--radius-md);height:98%!important;border:1px solid var(--color-accent);border-width:1px 3px}::part(region-handle){width:5px!important;border:none}#microphone.svelte-imtedr{width:100%;display:none}.component-wrapper.svelte-imtedr{padding:var(--size-3)}#timestamps.svelte-imtedr{display:flex;justify-content:space-between;align-items:center;width:100%;padding:var(--size-1) 0;margin:var(--spacing-md) 0}#time.svelte-imtedr,#duration.svelte-imtedr{color:var(--neutral-400)}#trim-duration.svelte-imtedr{color:var(--color-accent);margin-right:var(--spacing-sm)}.mic-wrap.svelte-16e5vwh{display:block;align-items:center;margin:var(--spacing-xl)}.stop-button-paused.svelte-16e5vwh{display:none;height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--neutral-400);margin-right:5px}.stop-button-paused.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.stop-button.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 
var(--spacing-xl);animation:scaling 1.8s infinite}.stop-button.svelte-16e5vwh{height:var(--size-8);width:var(--size-20);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);align-items:center;border:1px solid var(--primary-600);margin-right:5px;display:flex}.record-button.svelte-16e5vwh:before{content:"";height:var(--size-4);width:var(--size-4);border-radius:var(--radius-full);background:var(--primary-600);margin:0 var(--spacing-xl)}.record-button.svelte-16e5vwh{height:var(--size-8);width:var(--size-24);background-color:var(--block-background-fill);border-radius:var(--radius-3xl);display:flex;align-items:center;border:1px solid var(--neutral-400)}.source-selection.svelte-10shjqk{display:flex;align-items:center;justify-content:center;border-top:1px solid var(--border-color-primary);width:95%;margin:0 auto}.icon.svelte-10shjqk{width:22px;height:22px;margin:var(--spacing-lg) var(--spacing-xs);padding:var(--spacing-xs);color:var(--neutral-400);border-radius:var(--radius-md)}.icon.svelte-10shjqk:hover,.icon.svelte-10shjqk:focus{color:var(--color-accent)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py deleted file mode 100644 index 066f41130d3b4b5fcc9aa22091e382f516b136aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/util.py +++ /dev/null @@ -1,165 +0,0 @@ -import abc -import importlib -import io -import sys -import types -import pathlib -import contextlib - -from . import data01 -from ..abc import ResourceReader -from ._compat import import_helper, os_helper -from . import zip as zip_ - - -from importlib.machinery import ModuleSpec - - -class Reader(ResourceReader): - def __init__(self, **kwargs): - vars(self).update(kwargs) - - def get_resource_reader(self, package): - return self - - def open_resource(self, path): - self._path = path - if isinstance(self.file, Exception): - raise self.file - return self.file - - def resource_path(self, path_): - self._path = path_ - if isinstance(self.path, Exception): - raise self.path - return self.path - - def is_resource(self, path_): - self._path = path_ - if isinstance(self.path, Exception): - raise self.path - - def part(entry): - return entry.split('/') - - return any( - len(parts) == 1 and parts[0] == path_ for parts in map(part, self._contents) - ) - - def contents(self): - if isinstance(self.path, Exception): - raise self.path - yield from self._contents - - -def create_package_from_loader(loader, is_package=True): - name = 'testingpackage' - module = types.ModuleType(name) - spec = ModuleSpec(name, loader, origin='does-not-exist', is_package=is_package) - module.__spec__ = spec - module.__loader__ = loader - return module - - -def create_package(file=None, path=None, is_package=True, contents=()): - return create_package_from_loader( - Reader(file=file, path=path, _contents=contents), - is_package, - ) - - -class CommonTests(metaclass=abc.ABCMeta): - """ - Tests shared by test_open, test_path, and test_read. - """ - - @abc.abstractmethod - def execute(self, package, path): - """ - Call the pertinent legacy API function (e.g. open_text, path) - on package and path. - """ - - def test_package_name(self): - """ - Passing in the package name should succeed. 
- """ - self.execute(data01.__name__, 'utf-8.file') - - def test_package_object(self): - """ - Passing in the package itself should succeed. - """ - self.execute(data01, 'utf-8.file') - - def test_string_path(self): - """ - Passing in a string for the path should succeed. - """ - path = 'utf-8.file' - self.execute(data01, path) - - def test_pathlib_path(self): - """ - Passing in a pathlib.PurePath object for the path should succeed. - """ - path = pathlib.PurePath('utf-8.file') - self.execute(data01, path) - - def test_importing_module_as_side_effect(self): - """ - The anchor package can already be imported. - """ - del sys.modules[data01.__name__] - self.execute(data01.__name__, 'utf-8.file') - - def test_missing_path(self): - """ - Attempting to open or read or request the path for a - non-existent path should succeed if open_resource - can return a viable data stream. - """ - bytes_data = io.BytesIO(b'Hello, world!') - package = create_package(file=bytes_data, path=FileNotFoundError()) - self.execute(package, 'utf-8.file') - self.assertEqual(package.__loader__._path, 'utf-8.file') - - def test_extant_path(self): - # Attempting to open or read or request the path when the - # path does exist should still succeed. Does not assert - # anything about the result. - bytes_data = io.BytesIO(b'Hello, world!') - # any path that exists - path = __file__ - package = create_package(file=bytes_data, path=path) - self.execute(package, 'utf-8.file') - self.assertEqual(package.__loader__._path, 'utf-8.file') - - def test_useless_loader(self): - package = create_package(file=FileNotFoundError(), path=FileNotFoundError()) - with self.assertRaises(FileNotFoundError): - self.execute(package, 'utf-8.file') - - -class ZipSetupBase: - ZIP_MODULE = 'data01' - - def setUp(self): - self.fixtures = contextlib.ExitStack() - self.addCleanup(self.fixtures.close) - - modules = import_helper.modules_setup() - self.addCleanup(import_helper.modules_cleanup, *modules) - - temp_dir = self.fixtures.enter_context(os_helper.temp_dir()) - modules = pathlib.Path(temp_dir) / 'zipped modules.zip' - src_path = pathlib.Path(__file__).parent.joinpath(self.ZIP_MODULE) - self.fixtures.enter_context( - import_helper.DirsOnSysPath(str(zip_.make_zip_file(src_path, modules))) - ) - - self.data = importlib.import_module(self.ZIP_MODULE) - - -class ZipSetup(ZipSetupBase): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py deleted file mode 100644 index f6e3ad3974ca63115e1f8124e743235bb300f1a1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/npy_pkg_config.py +++ /dev/null @@ -1,437 +0,0 @@ -import sys -import re -import os - -from configparser import RawConfigParser - -__all__ = ['FormatError', 'PkgNotFound', 'LibraryInfo', 'VariableSet', - 'read_config', 'parse_flags'] - -_VAR = re.compile(r'\$\{([a-zA-Z0-9_-]+)\}') - -class FormatError(OSError): - """ - Exception thrown when there is a problem parsing a configuration file. - - """ - def __init__(self, msg): - self.msg = msg - - def __str__(self): - return self.msg - -class PkgNotFound(OSError): - """Exception raised when a package can not be located.""" - def __init__(self, msg): - self.msg = msg - - def __str__(self): - return self.msg - -def parse_flags(line): - """ - Parse a line from a config file containing compile flags. 
- - Parameters - ---------- - line : str - A single line containing one or more compile flags. - - Returns - ------- - d : dict - Dictionary of parsed flags, split into relevant categories. - These categories are the keys of `d`: - - * 'include_dirs' - * 'library_dirs' - * 'libraries' - * 'macros' - * 'ignored' - - """ - d = {'include_dirs': [], 'library_dirs': [], 'libraries': [], - 'macros': [], 'ignored': []} - - flags = (' ' + line).split(' -') - for flag in flags: - flag = '-' + flag - if len(flag) > 0: - if flag.startswith('-I'): - d['include_dirs'].append(flag[2:].strip()) - elif flag.startswith('-L'): - d['library_dirs'].append(flag[2:].strip()) - elif flag.startswith('-l'): - d['libraries'].append(flag[2:].strip()) - elif flag.startswith('-D'): - d['macros'].append(flag[2:].strip()) - else: - d['ignored'].append(flag) - - return d - -def _escape_backslash(val): - return val.replace('\\', '\\\\') - -class LibraryInfo: - """ - Object containing build information about a library. - - Parameters - ---------- - name : str - The library name. - description : str - Description of the library. - version : str - Version string. - sections : dict - The sections of the configuration file for the library. The keys are - the section headers, the values the text under each header. - vars : class instance - A `VariableSet` instance, which contains ``(name, value)`` pairs for - variables defined in the configuration file for the library. - requires : sequence, optional - The required libraries for the library to be installed. - - Notes - ----- - All input parameters (except "sections" which is a method) are available as - attributes of the same name. - - """ - def __init__(self, name, description, version, sections, vars, requires=None): - self.name = name - self.description = description - if requires: - self.requires = requires - else: - self.requires = [] - self.version = version - self._sections = sections - self.vars = vars - - def sections(self): - """ - Return the section headers of the config file. - - Parameters - ---------- - None - - Returns - ------- - keys : list of str - The list of section headers. - - """ - return list(self._sections.keys()) - - def cflags(self, section="default"): - val = self.vars.interpolate(self._sections[section]['cflags']) - return _escape_backslash(val) - - def libs(self, section="default"): - val = self.vars.interpolate(self._sections[section]['libs']) - return _escape_backslash(val) - - def __str__(self): - m = ['Name: %s' % self.name, 'Description: %s' % self.description] - if self.requires: - m.append('Requires:') - else: - m.append('Requires: %s' % ",".join(self.requires)) - m.append('Version: %s' % self.version) - - return "\n".join(m) - -class VariableSet: - """ - Container object for the variables defined in a config file. - - `VariableSet` can be used as a plain dictionary, with the variable names - as keys. - - Parameters - ---------- - d : dict - Dict of items in the "variables" section of the configuration file. 
- - """ - def __init__(self, d): - self._raw_data = dict([(k, v) for k, v in d.items()]) - - self._re = {} - self._re_sub = {} - - self._init_parse() - - def _init_parse(self): - for k, v in self._raw_data.items(): - self._init_parse_var(k, v) - - def _init_parse_var(self, name, value): - self._re[name] = re.compile(r'\$\{%s\}' % name) - self._re_sub[name] = value - - def interpolate(self, value): - # Brute force: we keep interpolating until there is no '${var}' anymore - # or until interpolated string is equal to input string - def _interpolate(value): - for k in self._re.keys(): - value = self._re[k].sub(self._re_sub[k], value) - return value - while _VAR.search(value): - nvalue = _interpolate(value) - if nvalue == value: - break - value = nvalue - - return value - - def variables(self): - """ - Return the list of variable names. - - Parameters - ---------- - None - - Returns - ------- - names : list of str - The names of all variables in the `VariableSet` instance. - - """ - return list(self._raw_data.keys()) - - # Emulate a dict to set/get variables values - def __getitem__(self, name): - return self._raw_data[name] - - def __setitem__(self, name, value): - self._raw_data[name] = value - self._init_parse_var(name, value) - -def parse_meta(config): - if not config.has_section('meta'): - raise FormatError("No meta section found !") - - d = dict(config.items('meta')) - - for k in ['name', 'description', 'version']: - if not k in d: - raise FormatError("Option %s (section [meta]) is mandatory, " - "but not found" % k) - - if not 'requires' in d: - d['requires'] = [] - - return d - -def parse_variables(config): - if not config.has_section('variables'): - raise FormatError("No variables section found !") - - d = {} - - for name, value in config.items("variables"): - d[name] = value - - return VariableSet(d) - -def parse_sections(config): - return meta_d, r - -def pkg_to_filename(pkg_name): - return "%s.ini" % pkg_name - -def parse_config(filename, dirs=None): - if dirs: - filenames = [os.path.join(d, filename) for d in dirs] - else: - filenames = [filename] - - config = RawConfigParser() - - n = config.read(filenames) - if not len(n) >= 1: - raise PkgNotFound("Could not find file(s) %s" % str(filenames)) - - # Parse meta and variables sections - meta = parse_meta(config) - - vars = {} - if config.has_section('variables'): - for name, value in config.items("variables"): - vars[name] = _escape_backslash(value) - - # Parse "normal" sections - secs = [s for s in config.sections() if not s in ['meta', 'variables']] - sections = {} - - requires = {} - for s in secs: - d = {} - if config.has_option(s, "requires"): - requires[s] = config.get(s, 'requires') - - for name, value in config.items(s): - d[name] = value - sections[s] = d - - return meta, vars, sections, requires - -def _read_config_imp(filenames, dirs=None): - def _read_config(f): - meta, vars, sections, reqs = parse_config(f, dirs) - # recursively add sections and variables of required libraries - for rname, rvalue in reqs.items(): - nmeta, nvars, nsections, nreqs = _read_config(pkg_to_filename(rvalue)) - - # Update var dict for variables not in 'top' config file - for k, v in nvars.items(): - if not k in vars: - vars[k] = v - - # Update sec dict - for oname, ovalue in nsections[rname].items(): - if ovalue: - sections[rname][oname] += ' %s' % ovalue - - return meta, vars, sections, reqs - - meta, vars, sections, reqs = _read_config(filenames) - - # FIXME: document this. 
If pkgname is defined in the variables section, and - # there is no pkgdir variable defined, pkgdir is automatically defined to - # the path of pkgname. This requires the package to be imported to work - if not 'pkgdir' in vars and "pkgname" in vars: - pkgname = vars["pkgname"] - if not pkgname in sys.modules: - raise ValueError("You should import %s to get information on %s" % - (pkgname, meta["name"])) - - mod = sys.modules[pkgname] - vars["pkgdir"] = _escape_backslash(os.path.dirname(mod.__file__)) - - return LibraryInfo(name=meta["name"], description=meta["description"], - version=meta["version"], sections=sections, vars=VariableSet(vars)) - -# Trivial cache to cache LibraryInfo instances creation. To be really -# efficient, the cache should be handled in read_config, since a same file can -# be parsed many time outside LibraryInfo creation, but I doubt this will be a -# problem in practice -_CACHE = {} -def read_config(pkgname, dirs=None): - """ - Return library info for a package from its configuration file. - - Parameters - ---------- - pkgname : str - Name of the package (should match the name of the .ini file, without - the extension, e.g. foo for the file foo.ini). - dirs : sequence, optional - If given, should be a sequence of directories - usually including - the NumPy base directory - where to look for npy-pkg-config files. - - Returns - ------- - pkginfo : class instance - The `LibraryInfo` instance containing the build information. - - Raises - ------ - PkgNotFound - If the package is not found. - - See Also - -------- - misc_util.get_info, misc_util.get_pkg_info - - Examples - -------- - >>> npymath_info = np.distutils.npy_pkg_config.read_config('npymath') - >>> type(npymath_info) - - >>> print(npymath_info) - Name: npymath - Description: Portable, core math library implementing C99 standard - Requires: - Version: 0.1 #random - - """ - try: - return _CACHE[pkgname] - except KeyError: - v = _read_config_imp(pkg_to_filename(pkgname), dirs) - _CACHE[pkgname] = v - return v - -# TODO: -# - implements version comparison (modversion + atleast) - -# pkg-config simple emulator - useful for debugging, and maybe later to query -# the system -if __name__ == '__main__': - from optparse import OptionParser - import glob - - parser = OptionParser() - parser.add_option("--cflags", dest="cflags", action="store_true", - help="output all preprocessor and compiler flags") - parser.add_option("--libs", dest="libs", action="store_true", - help="output all linker flags") - parser.add_option("--use-section", dest="section", - help="use this section instead of default for options") - parser.add_option("--version", dest="version", action="store_true", - help="output version") - parser.add_option("--atleast-version", dest="min_version", - help="Minimal version") - parser.add_option("--list-all", dest="list_all", action="store_true", - help="Minimal version") - parser.add_option("--define-variable", dest="define_variable", - help="Replace variable with the given value") - - (options, args) = parser.parse_args(sys.argv) - - if len(args) < 2: - raise ValueError("Expect package name on the command line:") - - if options.list_all: - files = glob.glob("*.ini") - for f in files: - info = read_config(f) - print("%s\t%s - %s" % (info.name, info.name, info.description)) - - pkg_name = args[1] - d = os.environ.get('NPY_PKG_CONFIG_PATH') - if d: - info = read_config(pkg_name, ['numpy/core/lib/npy-pkg-config', '.', d]) - else: - info = read_config(pkg_name, ['numpy/core/lib/npy-pkg-config', '.']) - - if 
options.section: - section = options.section - else: - section = "default" - - if options.define_variable: - m = re.search(r'([\S]+)=([\S]+)', options.define_variable) - if not m: - raise ValueError("--define-variable option should be of " - "the form --define-variable=foo=bar") - else: - name = m.group(1) - value = m.group(2) - info.vars[name] = value - - if options.cflags: - print(info.cflags(section)) - if options.libs: - print(info.libs(section)) - if options.version: - print(info.version) - if options.min_version: - print(info.version >= options.min_version) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py deleted file mode 100644 index eb008c6002c86c94b180533230f849c909d10f39..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py +++ /dev/null @@ -1,541 +0,0 @@ -"""Test functions for matrix module - -""" -from numpy.testing import ( - assert_equal, assert_array_equal, assert_array_max_ulp, - assert_array_almost_equal, assert_raises, assert_ -) -from numpy import ( - arange, add, fliplr, flipud, zeros, ones, eye, array, diag, histogram2d, - tri, mask_indices, triu_indices, triu_indices_from, tril_indices, - tril_indices_from, vander, -) -import numpy as np - -import pytest - - -def get_mat(n): - data = arange(n) - data = add.outer(data, data) - return data - - -class TestEye: - def test_basic(self): - assert_equal(eye(4), - array([[1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]])) - - assert_equal(eye(4, dtype='f'), - array([[1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]], 'f')) - - assert_equal(eye(3) == 1, - eye(3, dtype=bool)) - - def test_uint64(self): - # Regression test for gh-9982 - assert_equal(eye(np.uint64(2), dtype=int), array([[1, 0], [0, 1]])) - assert_equal(eye(np.uint64(2), M=np.uint64(4), k=np.uint64(1)), - array([[0, 1, 0, 0], [0, 0, 1, 0]])) - - def test_diag(self): - assert_equal(eye(4, k=1), - array([[0, 1, 0, 0], - [0, 0, 1, 0], - [0, 0, 0, 1], - [0, 0, 0, 0]])) - - assert_equal(eye(4, k=-1), - array([[0, 0, 0, 0], - [1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0]])) - - def test_2d(self): - assert_equal(eye(4, 3), - array([[1, 0, 0], - [0, 1, 0], - [0, 0, 1], - [0, 0, 0]])) - - assert_equal(eye(3, 4), - array([[1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0]])) - - def test_diag2d(self): - assert_equal(eye(3, 4, k=2), - array([[0, 0, 1, 0], - [0, 0, 0, 1], - [0, 0, 0, 0]])) - - assert_equal(eye(4, 3, k=-2), - array([[0, 0, 0], - [0, 0, 0], - [1, 0, 0], - [0, 1, 0]])) - - def test_eye_bounds(self): - assert_equal(eye(2, 2, 1), [[0, 1], [0, 0]]) - assert_equal(eye(2, 2, -1), [[0, 0], [1, 0]]) - assert_equal(eye(2, 2, 2), [[0, 0], [0, 0]]) - assert_equal(eye(2, 2, -2), [[0, 0], [0, 0]]) - assert_equal(eye(3, 2, 2), [[0, 0], [0, 0], [0, 0]]) - assert_equal(eye(3, 2, 1), [[0, 1], [0, 0], [0, 0]]) - assert_equal(eye(3, 2, -1), [[0, 0], [1, 0], [0, 1]]) - assert_equal(eye(3, 2, -2), [[0, 0], [0, 0], [1, 0]]) - assert_equal(eye(3, 2, -3), [[0, 0], [0, 0], [0, 0]]) - - def test_strings(self): - assert_equal(eye(2, 2, dtype='S3'), - [[b'1', b''], [b'', b'1']]) - - def test_bool(self): - assert_equal(eye(2, 2, dtype=bool), [[True, False], [False, True]]) - - def test_order(self): - mat_c = eye(4, 3, k=-1) - mat_f = eye(4, 3, k=-1, order='F') - assert_equal(mat_c, mat_f) - assert mat_c.flags.c_contiguous 
- assert not mat_c.flags.f_contiguous - assert not mat_f.flags.c_contiguous - assert mat_f.flags.f_contiguous - - -class TestDiag: - def test_vector(self): - vals = (100 * arange(5)).astype('l') - b = zeros((5, 5)) - for k in range(5): - b[k, k] = vals[k] - assert_equal(diag(vals), b) - b = zeros((7, 7)) - c = b.copy() - for k in range(5): - b[k, k + 2] = vals[k] - c[k + 2, k] = vals[k] - assert_equal(diag(vals, k=2), b) - assert_equal(diag(vals, k=-2), c) - - def test_matrix(self, vals=None): - if vals is None: - vals = (100 * get_mat(5) + 1).astype('l') - b = zeros((5,)) - for k in range(5): - b[k] = vals[k, k] - assert_equal(diag(vals), b) - b = b * 0 - for k in range(3): - b[k] = vals[k, k + 2] - assert_equal(diag(vals, 2), b[:3]) - for k in range(3): - b[k] = vals[k + 2, k] - assert_equal(diag(vals, -2), b[:3]) - - def test_fortran_order(self): - vals = array((100 * get_mat(5) + 1), order='F', dtype='l') - self.test_matrix(vals) - - def test_diag_bounds(self): - A = [[1, 2], [3, 4], [5, 6]] - assert_equal(diag(A, k=2), []) - assert_equal(diag(A, k=1), [2]) - assert_equal(diag(A, k=0), [1, 4]) - assert_equal(diag(A, k=-1), [3, 6]) - assert_equal(diag(A, k=-2), [5]) - assert_equal(diag(A, k=-3), []) - - def test_failure(self): - assert_raises(ValueError, diag, [[[1]]]) - - -class TestFliplr: - def test_basic(self): - assert_raises(ValueError, fliplr, ones(4)) - a = get_mat(4) - b = a[:, ::-1] - assert_equal(fliplr(a), b) - a = [[0, 1, 2], - [3, 4, 5]] - b = [[2, 1, 0], - [5, 4, 3]] - assert_equal(fliplr(a), b) - - -class TestFlipud: - def test_basic(self): - a = get_mat(4) - b = a[::-1, :] - assert_equal(flipud(a), b) - a = [[0, 1, 2], - [3, 4, 5]] - b = [[3, 4, 5], - [0, 1, 2]] - assert_equal(flipud(a), b) - - -class TestHistogram2d: - def test_simple(self): - x = array( - [0.41702200, 0.72032449, 1.1437481e-4, 0.302332573, 0.146755891]) - y = array( - [0.09233859, 0.18626021, 0.34556073, 0.39676747, 0.53881673]) - xedges = np.linspace(0, 1, 10) - yedges = np.linspace(0, 1, 10) - H = histogram2d(x, y, (xedges, yedges))[0] - answer = array( - [[0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0]]) - assert_array_equal(H.T, answer) - H = histogram2d(x, y, xedges)[0] - assert_array_equal(H.T, answer) - H, xedges, yedges = histogram2d(list(range(10)), list(range(10))) - assert_array_equal(H, eye(10, 10)) - assert_array_equal(xedges, np.linspace(0, 9, 11)) - assert_array_equal(yedges, np.linspace(0, 9, 11)) - - def test_asym(self): - x = array([1, 1, 2, 3, 4, 4, 4, 5]) - y = array([1, 3, 2, 0, 1, 2, 3, 4]) - H, xed, yed = histogram2d( - x, y, (6, 5), range=[[0, 6], [0, 5]], density=True) - answer = array( - [[0., 0, 0, 0, 0], - [0, 1, 0, 1, 0], - [0, 0, 1, 0, 0], - [1, 0, 0, 0, 0], - [0, 1, 1, 1, 0], - [0, 0, 0, 0, 1]]) - assert_array_almost_equal(H, answer/8., 3) - assert_array_equal(xed, np.linspace(0, 6, 7)) - assert_array_equal(yed, np.linspace(0, 5, 6)) - - def test_density(self): - x = array([1, 2, 3, 1, 2, 3, 1, 2, 3]) - y = array([1, 1, 1, 2, 2, 2, 3, 3, 3]) - H, xed, yed = histogram2d( - x, y, [[1, 2, 3, 5], [1, 2, 3, 5]], density=True) - answer = array([[1, 1, .5], - [1, 1, .5], - [.5, .5, .25]])/9. - assert_array_almost_equal(H, answer, 3) - - def test_all_outliers(self): - r = np.random.rand(100) + 1. 
+ 1e6 # histogramdd rounds by decimal=6 - H, xed, yed = histogram2d(r, r, (4, 5), range=([0, 1], [0, 1])) - assert_array_equal(H, 0) - - def test_empty(self): - a, edge1, edge2 = histogram2d([], [], bins=([0, 1], [0, 1])) - assert_array_max_ulp(a, array([[0.]])) - - a, edge1, edge2 = histogram2d([], [], bins=4) - assert_array_max_ulp(a, np.zeros((4, 4))) - - def test_binparameter_combination(self): - x = array( - [0, 0.09207008, 0.64575234, 0.12875982, 0.47390599, - 0.59944483, 1]) - y = array( - [0, 0.14344267, 0.48988575, 0.30558665, 0.44700682, - 0.15886423, 1]) - edges = (0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1) - H, xe, ye = histogram2d(x, y, (edges, 4)) - answer = array( - [[2., 0., 0., 0.], - [0., 1., 0., 0.], - [0., 0., 0., 0.], - [0., 0., 0., 0.], - [0., 1., 0., 0.], - [1., 0., 0., 0.], - [0., 1., 0., 0.], - [0., 0., 0., 0.], - [0., 0., 0., 0.], - [0., 0., 0., 1.]]) - assert_array_equal(H, answer) - assert_array_equal(ye, array([0., 0.25, 0.5, 0.75, 1])) - H, xe, ye = histogram2d(x, y, (4, edges)) - answer = array( - [[1., 1., 0., 1., 0., 0., 0., 0., 0., 0.], - [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], - [0., 1., 0., 0., 1., 0., 0., 0., 0., 0.], - [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]]) - assert_array_equal(H, answer) - assert_array_equal(xe, array([0., 0.25, 0.5, 0.75, 1])) - - def test_dispatch(self): - class ShouldDispatch: - def __array_function__(self, function, types, args, kwargs): - return types, args, kwargs - - xy = [1, 2] - s_d = ShouldDispatch() - r = histogram2d(s_d, xy) - # Cannot use assert_equal since that dispatches... - assert_(r == ((ShouldDispatch,), (s_d, xy), {})) - r = histogram2d(xy, s_d) - assert_(r == ((ShouldDispatch,), (xy, s_d), {})) - r = histogram2d(xy, xy, bins=s_d) - assert_(r, ((ShouldDispatch,), (xy, xy), dict(bins=s_d))) - r = histogram2d(xy, xy, bins=[s_d, 5]) - assert_(r, ((ShouldDispatch,), (xy, xy), dict(bins=[s_d, 5]))) - assert_raises(Exception, histogram2d, xy, xy, bins=[s_d]) - r = histogram2d(xy, xy, weights=s_d) - assert_(r, ((ShouldDispatch,), (xy, xy), dict(weights=s_d))) - - @pytest.mark.parametrize(("x_len", "y_len"), [(10, 11), (20, 19)]) - def test_bad_length(self, x_len, y_len): - x, y = np.ones(x_len), np.ones(y_len) - with pytest.raises(ValueError, - match='x and y must have the same length.'): - histogram2d(x, y) - - -class TestTri: - def test_dtype(self): - out = array([[1, 0, 0], - [1, 1, 0], - [1, 1, 1]]) - assert_array_equal(tri(3), out) - assert_array_equal(tri(3, dtype=bool), out.astype(bool)) - - -def test_tril_triu_ndim2(): - for dtype in np.typecodes['AllFloat'] + np.typecodes['AllInteger']: - a = np.ones((2, 2), dtype=dtype) - b = np.tril(a) - c = np.triu(a) - assert_array_equal(b, [[1, 0], [1, 1]]) - assert_array_equal(c, b.T) - # should return the same dtype as the original array - assert_equal(b.dtype, a.dtype) - assert_equal(c.dtype, a.dtype) - - -def test_tril_triu_ndim3(): - for dtype in np.typecodes['AllFloat'] + np.typecodes['AllInteger']: - a = np.array([ - [[1, 1], [1, 1]], - [[1, 1], [1, 0]], - [[1, 1], [0, 0]], - ], dtype=dtype) - a_tril_desired = np.array([ - [[1, 0], [1, 1]], - [[1, 0], [1, 0]], - [[1, 0], [0, 0]], - ], dtype=dtype) - a_triu_desired = np.array([ - [[1, 1], [0, 1]], - [[1, 1], [0, 0]], - [[1, 1], [0, 0]], - ], dtype=dtype) - a_triu_observed = np.triu(a) - a_tril_observed = np.tril(a) - assert_array_equal(a_triu_observed, a_triu_desired) - assert_array_equal(a_tril_observed, a_tril_desired) - assert_equal(a_triu_observed.dtype, a.dtype) - assert_equal(a_tril_observed.dtype, 
a.dtype) - - -def test_tril_triu_with_inf(): - # Issue 4859 - arr = np.array([[1, 1, np.inf], - [1, 1, 1], - [np.inf, 1, 1]]) - out_tril = np.array([[1, 0, 0], - [1, 1, 0], - [np.inf, 1, 1]]) - out_triu = out_tril.T - assert_array_equal(np.triu(arr), out_triu) - assert_array_equal(np.tril(arr), out_tril) - - -def test_tril_triu_dtype(): - # Issue 4916 - # tril and triu should return the same dtype as input - for c in np.typecodes['All']: - if c == 'V': - continue - arr = np.zeros((3, 3), dtype=c) - assert_equal(np.triu(arr).dtype, arr.dtype) - assert_equal(np.tril(arr).dtype, arr.dtype) - - # check special cases - arr = np.array([['2001-01-01T12:00', '2002-02-03T13:56'], - ['2004-01-01T12:00', '2003-01-03T13:45']], - dtype='datetime64') - assert_equal(np.triu(arr).dtype, arr.dtype) - assert_equal(np.tril(arr).dtype, arr.dtype) - - arr = np.zeros((3, 3), dtype='f4,f4') - assert_equal(np.triu(arr).dtype, arr.dtype) - assert_equal(np.tril(arr).dtype, arr.dtype) - - -def test_mask_indices(): - # simple test without offset - iu = mask_indices(3, np.triu) - a = np.arange(9).reshape(3, 3) - assert_array_equal(a[iu], array([0, 1, 2, 4, 5, 8])) - # Now with an offset - iu1 = mask_indices(3, np.triu, 1) - assert_array_equal(a[iu1], array([1, 2, 5])) - - -def test_tril_indices(): - # indices without and with offset - il1 = tril_indices(4) - il2 = tril_indices(4, k=2) - il3 = tril_indices(4, m=5) - il4 = tril_indices(4, k=2, m=5) - - a = np.array([[1, 2, 3, 4], - [5, 6, 7, 8], - [9, 10, 11, 12], - [13, 14, 15, 16]]) - b = np.arange(1, 21).reshape(4, 5) - - # indexing: - assert_array_equal(a[il1], - array([1, 5, 6, 9, 10, 11, 13, 14, 15, 16])) - assert_array_equal(b[il3], - array([1, 6, 7, 11, 12, 13, 16, 17, 18, 19])) - - # And for assigning values: - a[il1] = -1 - assert_array_equal(a, - array([[-1, 2, 3, 4], - [-1, -1, 7, 8], - [-1, -1, -1, 12], - [-1, -1, -1, -1]])) - b[il3] = -1 - assert_array_equal(b, - array([[-1, 2, 3, 4, 5], - [-1, -1, 8, 9, 10], - [-1, -1, -1, 14, 15], - [-1, -1, -1, -1, 20]])) - # These cover almost the whole array (two diagonals right of the main one): - a[il2] = -10 - assert_array_equal(a, - array([[-10, -10, -10, 4], - [-10, -10, -10, -10], - [-10, -10, -10, -10], - [-10, -10, -10, -10]])) - b[il4] = -10 - assert_array_equal(b, - array([[-10, -10, -10, 4, 5], - [-10, -10, -10, -10, 10], - [-10, -10, -10, -10, -10], - [-10, -10, -10, -10, -10]])) - - -class TestTriuIndices: - def test_triu_indices(self): - iu1 = triu_indices(4) - iu2 = triu_indices(4, k=2) - iu3 = triu_indices(4, m=5) - iu4 = triu_indices(4, k=2, m=5) - - a = np.array([[1, 2, 3, 4], - [5, 6, 7, 8], - [9, 10, 11, 12], - [13, 14, 15, 16]]) - b = np.arange(1, 21).reshape(4, 5) - - # Both for indexing: - assert_array_equal(a[iu1], - array([1, 2, 3, 4, 6, 7, 8, 11, 12, 16])) - assert_array_equal(b[iu3], - array([1, 2, 3, 4, 5, 7, 8, 9, - 10, 13, 14, 15, 19, 20])) - - # And for assigning values: - a[iu1] = -1 - assert_array_equal(a, - array([[-1, -1, -1, -1], - [5, -1, -1, -1], - [9, 10, -1, -1], - [13, 14, 15, -1]])) - b[iu3] = -1 - assert_array_equal(b, - array([[-1, -1, -1, -1, -1], - [6, -1, -1, -1, -1], - [11, 12, -1, -1, -1], - [16, 17, 18, -1, -1]])) - - # These cover almost the whole array (two diagonals right of the - # main one): - a[iu2] = -10 - assert_array_equal(a, - array([[-1, -1, -10, -10], - [5, -1, -1, -10], - [9, 10, -1, -1], - [13, 14, 15, -1]])) - b[iu4] = -10 - assert_array_equal(b, - array([[-1, -1, -10, -10, -10], - [6, -1, -1, -10, -10], - [11, 12, -1, -1, -10], - [16, 17, 18, -1, -1]])) 
- - -class TestTrilIndicesFrom: - def test_exceptions(self): - assert_raises(ValueError, tril_indices_from, np.ones((2,))) - assert_raises(ValueError, tril_indices_from, np.ones((2, 2, 2))) - # assert_raises(ValueError, tril_indices_from, np.ones((2, 3))) - - -class TestTriuIndicesFrom: - def test_exceptions(self): - assert_raises(ValueError, triu_indices_from, np.ones((2,))) - assert_raises(ValueError, triu_indices_from, np.ones((2, 2, 2))) - # assert_raises(ValueError, triu_indices_from, np.ones((2, 3))) - - -class TestVander: - def test_basic(self): - c = np.array([0, 1, -2, 3]) - v = vander(c) - powers = np.array([[0, 0, 0, 0, 1], - [1, 1, 1, 1, 1], - [16, -8, 4, -2, 1], - [81, 27, 9, 3, 1]]) - # Check default value of N: - assert_array_equal(v, powers[:, 1:]) - # Check a range of N values, including 0 and 5 (greater than default) - m = powers.shape[1] - for n in range(6): - v = vander(c, N=n) - assert_array_equal(v, powers[:, m-n:m]) - - def test_dtypes(self): - c = array([11, -12, 13], dtype=np.int8) - v = vander(c) - expected = np.array([[121, 11, 1], - [144, -12, 1], - [169, 13, 1]]) - assert_array_equal(v, expected) - - c = array([1.0+1j, 1.0-1j]) - v = vander(c, N=3) - expected = np.array([[2j, 1+1j, 1], - [-2j, 1-1j, 1]]) - # The data is floating point, but the values are small integers, - # so assert_array_equal *should* be safe here (rather than, say, - # assert_array_almost_equal). - assert_array_equal(v, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py deleted file mode 100644 index 74e6a6a28ccb01b8ca0d52944bd385bfae706582..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/melt.py +++ /dev/null @@ -1,533 +0,0 @@ -from __future__ import annotations - -import re -from typing import TYPE_CHECKING - -import numpy as np - -from pandas.util._decorators import Appender - -from pandas.core.dtypes.common import is_list_like -from pandas.core.dtypes.concat import concat_compat -from pandas.core.dtypes.missing import notna - -import pandas.core.algorithms as algos -from pandas.core.arrays import Categorical -import pandas.core.common as com -from pandas.core.indexes.api import ( - Index, - MultiIndex, -) -from pandas.core.reshape.concat import concat -from pandas.core.reshape.util import tile_compat -from pandas.core.shared_docs import _shared_docs -from pandas.core.tools.numeric import to_numeric - -if TYPE_CHECKING: - from collections.abc import Hashable - - from pandas._typing import AnyArrayLike - - from pandas import DataFrame - - -@Appender(_shared_docs["melt"] % {"caller": "pd.melt(df, ", "other": "DataFrame.melt"}) -def melt( - frame: DataFrame, - id_vars=None, - value_vars=None, - var_name=None, - value_name: Hashable = "value", - col_level=None, - ignore_index: bool = True, -) -> DataFrame: - # If multiindex, gather names of columns on all level for checking presence - # of `id_vars` and `value_vars` - if isinstance(frame.columns, MultiIndex): - cols = [x for c in frame.columns for x in c] - else: - cols = list(frame.columns) - - if value_name in frame.columns: - raise ValueError( - f"value_name ({value_name}) cannot match an element in " - "the DataFrame columns." 
- ) - - if id_vars is not None: - if not is_list_like(id_vars): - id_vars = [id_vars] - elif isinstance(frame.columns, MultiIndex) and not isinstance(id_vars, list): - raise ValueError( - "id_vars must be a list of tuples when columns are a MultiIndex" - ) - else: - # Check that `id_vars` are in frame - id_vars = list(id_vars) - missing = Index(com.flatten(id_vars)).difference(cols) - if not missing.empty: - raise KeyError( - "The following 'id_vars' are not present " - f"in the DataFrame: {list(missing)}" - ) - else: - id_vars = [] - - if value_vars is not None: - if not is_list_like(value_vars): - value_vars = [value_vars] - elif isinstance(frame.columns, MultiIndex) and not isinstance(value_vars, list): - raise ValueError( - "value_vars must be a list of tuples when columns are a MultiIndex" - ) - else: - value_vars = list(value_vars) - # Check that `value_vars` are in frame - missing = Index(com.flatten(value_vars)).difference(cols) - if not missing.empty: - raise KeyError( - "The following 'value_vars' are not present in " - f"the DataFrame: {list(missing)}" - ) - if col_level is not None: - idx = frame.columns.get_level_values(col_level).get_indexer( - id_vars + value_vars - ) - else: - idx = algos.unique(frame.columns.get_indexer_for(id_vars + value_vars)) - frame = frame.iloc[:, idx] - else: - frame = frame.copy() - - if col_level is not None: # allow list or other? - # frame is a copy - frame.columns = frame.columns.get_level_values(col_level) - - if var_name is None: - if isinstance(frame.columns, MultiIndex): - if len(frame.columns.names) == len(set(frame.columns.names)): - var_name = frame.columns.names - else: - var_name = [f"variable_{i}" for i in range(len(frame.columns.names))] - else: - var_name = [ - frame.columns.name if frame.columns.name is not None else "variable" - ] - if isinstance(var_name, str): - var_name = [var_name] - - N, K = frame.shape - K -= len(id_vars) - - mdata: dict[Hashable, AnyArrayLike] = {} - for col in id_vars: - id_data = frame.pop(col) - if not isinstance(id_data.dtype, np.dtype): - # i.e. ExtensionDtype - if K > 0: - mdata[col] = concat([id_data] * K, ignore_index=True) - else: - # We can't concat empty list. (GH 46044) - mdata[col] = type(id_data)([], name=id_data.name, dtype=id_data.dtype) - else: - mdata[col] = np.tile(id_data._values, K) - - mcolumns = id_vars + var_name + [value_name] - - if frame.shape[1] > 0: - mdata[value_name] = concat( - [frame.iloc[:, i] for i in range(frame.shape[1])] - ).values - else: - mdata[value_name] = frame._values.ravel("F") - for i, col in enumerate(var_name): - mdata[col] = frame.columns._get_level_values(i).repeat(N) - - result = frame._constructor(mdata, columns=mcolumns) - - if not ignore_index: - result.index = tile_compat(frame.index, K) - - return result - - -def lreshape(data: DataFrame, groups, dropna: bool = True) -> DataFrame: - """ - Reshape wide-format data to long. Generalized inverse of DataFrame.pivot. - - Accepts a dictionary, ``groups``, in which each key is a new column name - and each value is a list of old column names that will be "melted" under - the new column name as part of the reshape. - - Parameters - ---------- - data : DataFrame - The wide-format DataFrame. - groups : dict - {new_name : list_of_columns}. - dropna : bool, default True - Do not include columns whose entries are all NaN. - - Returns - ------- - DataFrame - Reshaped DataFrame. - - See Also - -------- - melt : Unpivot a DataFrame from wide to long format, optionally leaving - identifiers set. 
- pivot : Create a spreadsheet-style pivot table as a DataFrame. - DataFrame.pivot : Pivot without aggregation that can handle - non-numeric data. - DataFrame.pivot_table : Generalization of pivot that can handle - duplicate values for one index/column pair. - DataFrame.unstack : Pivot based on the index values instead of a - column. - wide_to_long : Wide panel to long format. Less flexible but more - user-friendly than melt. - - Examples - -------- - >>> data = pd.DataFrame({'hr1': [514, 573], 'hr2': [545, 526], - ... 'team': ['Red Sox', 'Yankees'], - ... 'year1': [2007, 2007], 'year2': [2008, 2008]}) - >>> data - hr1 hr2 team year1 year2 - 0 514 545 Red Sox 2007 2008 - 1 573 526 Yankees 2007 2008 - - >>> pd.lreshape(data, {'year': ['year1', 'year2'], 'hr': ['hr1', 'hr2']}) - team year hr - 0 Red Sox 2007 514 - 1 Yankees 2007 573 - 2 Red Sox 2008 545 - 3 Yankees 2008 526 - """ - if isinstance(groups, dict): - keys = list(groups.keys()) - values = list(groups.values()) - else: - keys, values = zip(*groups) - - all_cols = list(set.union(*(set(x) for x in values))) - id_cols = list(data.columns.difference(all_cols)) - - K = len(values[0]) - - for seq in values: - if len(seq) != K: - raise ValueError("All column lists must be same length") - - mdata = {} - pivot_cols = [] - - for target, names in zip(keys, values): - to_concat = [data[col]._values for col in names] - - mdata[target] = concat_compat(to_concat) - pivot_cols.append(target) - - for col in id_cols: - mdata[col] = np.tile(data[col]._values, K) - - if dropna: - mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool) - for c in pivot_cols: - mask &= notna(mdata[c]) - if not mask.all(): - mdata = {k: v[mask] for k, v in mdata.items()} - - return data._constructor(mdata, columns=id_cols + pivot_cols) - - -def wide_to_long( - df: DataFrame, stubnames, i, j, sep: str = "", suffix: str = r"\d+" -) -> DataFrame: - r""" - Unpivot a DataFrame from wide to long format. - - Less flexible but more user-friendly than melt. - - With stubnames ['A', 'B'], this function expects to find one or more - group of columns with format - A-suffix1, A-suffix2,..., B-suffix1, B-suffix2,... - You specify what you want to call this suffix in the resulting long format - with `j` (for example `j='year'`) - - Each row of these wide variables are assumed to be uniquely identified by - `i` (can be a single column name or a list of column names) - - All remaining variables in the data frame are left intact. - - Parameters - ---------- - df : DataFrame - The wide-format DataFrame. - stubnames : str or list-like - The stub name(s). The wide format variables are assumed to - start with the stub names. - i : str or list-like - Column(s) to use as id variable(s). - j : str - The name of the sub-observation variable. What you wish to name your - suffix in the long format. - sep : str, default "" - A character indicating the separation of the variable names - in the wide format, to be stripped from the names in the long format. - For example, if your column names are A-suffix1, A-suffix2, you - can strip the hyphen by specifying `sep='-'`. - suffix : str, default '\\d+' - A regular expression capturing the wanted suffixes. '\\d+' captures - numeric suffixes. Suffixes with no numbers could be specified with the - negated character class '\\D+'. You can also further disambiguate - suffixes, for example, if your wide variables are of the form A-one, - B-two,.., and you have an unrelated column A-rating, you can ignore the - last one by specifying `suffix='(!?one|two)'`. 
When all suffixes are - numeric, they are cast to int64/float64. - - Returns - ------- - DataFrame - A DataFrame that contains each stub name as a variable, with new index - (i, j). - - See Also - -------- - melt : Unpivot a DataFrame from wide to long format, optionally leaving - identifiers set. - pivot : Create a spreadsheet-style pivot table as a DataFrame. - DataFrame.pivot : Pivot without aggregation that can handle - non-numeric data. - DataFrame.pivot_table : Generalization of pivot that can handle - duplicate values for one index/column pair. - DataFrame.unstack : Pivot based on the index values instead of a - column. - - Notes - ----- - All extra variables are left untouched. This simply uses - `pandas.melt` under the hood, but is hard-coded to "do the right thing" - in a typical case. - - Examples - -------- - >>> np.random.seed(123) - >>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"}, - ... "A1980" : {0 : "d", 1 : "e", 2 : "f"}, - ... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7}, - ... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1}, - ... "X" : dict(zip(range(3), np.random.randn(3))) - ... }) - >>> df["id"] = df.index - >>> df - A1970 A1980 B1970 B1980 X id - 0 a d 2.5 3.2 -1.085631 0 - 1 b e 1.2 1.3 0.997345 1 - 2 c f 0.7 0.1 0.282978 2 - >>> pd.wide_to_long(df, ["A", "B"], i="id", j="year") - ... # doctest: +NORMALIZE_WHITESPACE - X A B - id year - 0 1970 -1.085631 a 2.5 - 1 1970 0.997345 b 1.2 - 2 1970 0.282978 c 0.7 - 0 1980 -1.085631 d 3.2 - 1 1980 0.997345 e 1.3 - 2 1980 0.282978 f 0.1 - - With multiple id columns - - >>> df = pd.DataFrame({ - ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], - ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], - ... 'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], - ... 'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] - ... }) - >>> df - famid birth ht1 ht2 - 0 1 1 2.8 3.4 - 1 1 2 2.9 3.8 - 2 1 3 2.2 2.9 - 3 2 1 2.0 3.2 - 4 2 2 1.8 2.8 - 5 2 3 1.9 2.4 - 6 3 1 2.2 3.3 - 7 3 2 2.3 3.4 - 8 3 3 2.1 2.9 - >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age') - >>> l - ... # doctest: +NORMALIZE_WHITESPACE - ht - famid birth age - 1 1 1 2.8 - 2 3.4 - 2 1 2.9 - 2 3.8 - 3 1 2.2 - 2 2.9 - 2 1 1 2.0 - 2 3.2 - 2 1 1.8 - 2 2.8 - 3 1 1.9 - 2 2.4 - 3 1 1 2.2 - 2 3.3 - 2 1 2.3 - 2 3.4 - 3 1 2.1 - 2 2.9 - - Going from long back to wide just takes some creative use of `unstack` - - >>> w = l.unstack() - >>> w.columns = w.columns.map('{0[0]}{0[1]}'.format) - >>> w.reset_index() - famid birth ht1 ht2 - 0 1 1 2.8 3.4 - 1 1 2 2.9 3.8 - 2 1 3 2.2 2.9 - 3 2 1 2.0 3.2 - 4 2 2 1.8 2.8 - 5 2 3 1.9 2.4 - 6 3 1 2.2 3.3 - 7 3 2 2.3 3.4 - 8 3 3 2.1 2.9 - - Less wieldy column names are also handled - - >>> np.random.seed(0) - >>> df = pd.DataFrame({'A(weekly)-2010': np.random.rand(3), - ... 'A(weekly)-2011': np.random.rand(3), - ... 'B(weekly)-2010': np.random.rand(3), - ... 'B(weekly)-2011': np.random.rand(3), - ... 'X' : np.random.randint(3, size=3)}) - >>> df['id'] = df.index - >>> df # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS - A(weekly)-2010 A(weekly)-2011 B(weekly)-2010 B(weekly)-2011 X id - 0 0.548814 0.544883 0.437587 0.383442 0 0 - 1 0.715189 0.423655 0.891773 0.791725 1 1 - 2 0.602763 0.645894 0.963663 0.528895 1 2 - - >>> pd.wide_to_long(df, ['A(weekly)', 'B(weekly)'], i='id', - ... j='year', sep='-') - ... 
# doctest: +NORMALIZE_WHITESPACE - X A(weekly) B(weekly) - id year - 0 2010 0 0.548814 0.437587 - 1 2010 1 0.715189 0.891773 - 2 2010 1 0.602763 0.963663 - 0 2011 0 0.544883 0.383442 - 1 2011 1 0.423655 0.791725 - 2 2011 1 0.645894 0.528895 - - If we have many columns, we could also use a regex to find our - stubnames and pass that list on to wide_to_long - - >>> stubnames = sorted( - ... set([match[0] for match in df.columns.str.findall( - ... r'[A-B]\(.*\)').values if match != []]) - ... ) - >>> list(stubnames) - ['A(weekly)', 'B(weekly)'] - - All of the above examples have integers as suffixes. It is possible to - have non-integers as suffixes. - - >>> df = pd.DataFrame({ - ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], - ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], - ... 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], - ... 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] - ... }) - >>> df - famid birth ht_one ht_two - 0 1 1 2.8 3.4 - 1 1 2 2.9 3.8 - 2 1 3 2.2 2.9 - 3 2 1 2.0 3.2 - 4 2 2 1.8 2.8 - 5 2 3 1.9 2.4 - 6 3 1 2.2 3.3 - 7 3 2 2.3 3.4 - 8 3 3 2.1 2.9 - - >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age', - ... sep='_', suffix=r'\w+') - >>> l - ... # doctest: +NORMALIZE_WHITESPACE - ht - famid birth age - 1 1 one 2.8 - two 3.4 - 2 one 2.9 - two 3.8 - 3 one 2.2 - two 2.9 - 2 1 one 2.0 - two 3.2 - 2 one 1.8 - two 2.8 - 3 one 1.9 - two 2.4 - 3 1 one 2.2 - two 3.3 - 2 one 2.3 - two 3.4 - 3 one 2.1 - two 2.9 - """ - - def get_var_names(df, stub: str, sep: str, suffix: str) -> list[str]: - regex = rf"^{re.escape(stub)}{re.escape(sep)}{suffix}$" - pattern = re.compile(regex) - return [col for col in df.columns if pattern.match(col)] - - def melt_stub(df, stub: str, i, j, value_vars, sep: str): - newdf = melt( - df, - id_vars=i, - value_vars=value_vars, - value_name=stub.rstrip(sep), - var_name=j, - ) - newdf[j] = Categorical(newdf[j]) - newdf[j] = newdf[j].str.replace(re.escape(stub + sep), "", regex=True) - - # GH17627 Cast numerics suffixes to int/float - newdf[j] = to_numeric(newdf[j], errors="ignore") - - return newdf.set_index(i + [j]) - - if not is_list_like(stubnames): - stubnames = [stubnames] - else: - stubnames = list(stubnames) - - if any(col in stubnames for col in df.columns): - raise ValueError("stubname can't be identical to a column name") - - if not is_list_like(i): - i = [i] - else: - i = list(i) - - if df[i].duplicated().any(): - raise ValueError("the id variables need to uniquely identify each row") - - value_vars = [get_var_names(df, stub, sep, suffix) for stub in stubnames] - - value_vars_flattened = [e for sublist in value_vars for e in sublist] - id_vars = list(set(df.columns.tolist()).difference(value_vars_flattened)) - - _melted = [melt_stub(df, s, i, j, v, sep) for s, v in zip(stubnames, value_vars)] - melted = _melted[0].join(_melted[1:], how="outer") - - if len(i) == 1: - new = df[id_vars].set_index(i).join(melted) - return new - - new = df[id_vars].merge(melted.reset_index(), on=i).set_index(i + [j]) - - return new diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py deleted file mode 100644 index 75d47e3daa10339f4c4cc7b35c52f24bbb20277a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine.py +++ /dev/null @@ -1,17 +0,0 @@ -from pandas import Series -import 
pandas._testing as tm - - -class TestCombine: - def test_combine_scalar(self): - # GH#21248 - # Note - combine() with another Series is tested elsewhere because - # it is used when testing operators - ser = Series([i * 10 for i in range(5)]) - result = ser.combine(3, lambda x, y: x + y) - expected = Series([i * 10 + 3 for i in range(5)]) - tm.assert_series_equal(result, expected) - - result = ser.combine(22, lambda x, y: min(x, y)) - expected = Series([min(i * 10, 22) for i in range(5)]) - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py deleted file mode 100644 index f935688f1ca66303ba186ffc123afeaa69489b42..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/sphinxext.py +++ /dev/null @@ -1,239 +0,0 @@ -""" - pygments.sphinxext - ~~~~~~~~~~~~~~~~~~ - - Sphinx extension to generate automatic documentation of lexers, - formatters and filters. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import sys - -from docutils import nodes -from docutils.statemachine import ViewList -from docutils.parsers.rst import Directive -from sphinx.util.nodes import nested_parse_with_titles - - -MODULEDOC = ''' -.. module:: %s - -%s -%s -''' - -LEXERDOC = ''' -.. class:: %s - - :Short names: %s - :Filenames: %s - :MIME types: %s - - %s - -''' - -FMTERDOC = ''' -.. class:: %s - - :Short names: %s - :Filenames: %s - - %s - -''' - -FILTERDOC = ''' -.. class:: %s - - :Name: %s - - %s - -''' - - -class PygmentsDoc(Directive): - """ - A directive to collect all lexers/formatters/filters and generate - autoclass directives for them. - """ - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} - - def run(self): - self.filenames = set() - if self.arguments[0] == 'lexers': - out = self.document_lexers() - elif self.arguments[0] == 'formatters': - out = self.document_formatters() - elif self.arguments[0] == 'filters': - out = self.document_filters() - elif self.arguments[0] == 'lexers_overview': - out = self.document_lexers_overview() - else: - raise Exception('invalid argument for "pygmentsdoc" directive') - node = nodes.compound() - vl = ViewList(out.split('\n'), source='') - nested_parse_with_titles(self.state, vl, node) - for fn in self.filenames: - self.state.document.settings.record_dependencies.add(fn) - return node.children - - def document_lexers_overview(self): - """Generate a tabular overview of all lexers. 
- - The columns are the lexer name, the extensions handled by this lexer - (or "None"), the aliases and a link to the lexer class.""" - from pygments.lexers._mapping import LEXERS - import pygments.lexers - out = [] - - table = [] - - def format_link(name, url): - if url: - return f'`{name} <{url}>`_' - return name - - for classname, data in sorted(LEXERS.items(), key=lambda x: x[1][1].lower()): - lexer_cls = pygments.lexers.find_lexer_class(data[1]) - extensions = lexer_cls.filenames + lexer_cls.alias_filenames - - table.append({ - 'name': format_link(data[1], lexer_cls.url), - 'extensions': ', '.join(extensions).replace('*', '\\*').replace('_', '\\') or 'None', - 'aliases': ', '.join(data[2]), - 'class': f'{data[0]}.{classname}' - }) - - column_names = ['name', 'extensions', 'aliases', 'class'] - column_lengths = [max([len(row[column]) for row in table if row[column]]) - for column in column_names] - - def write_row(*columns): - """Format a table row""" - out = [] - for l, c in zip(column_lengths, columns): - if c: - out.append(c.ljust(l)) - else: - out.append(' '*l) - - return ' '.join(out) - - def write_seperator(): - """Write a table separator row""" - sep = ['='*c for c in column_lengths] - return write_row(*sep) - - out.append(write_seperator()) - out.append(write_row('Name', 'Extension(s)', 'Short name(s)', 'Lexer class')) - out.append(write_seperator()) - for row in table: - out.append(write_row( - row['name'], - row['extensions'], - row['aliases'], - f':class:`~{row["class"]}`')) - out.append(write_seperator()) - - return '\n'.join(out) - - def document_lexers(self): - from pygments.lexers._mapping import LEXERS - import pygments - import inspect - import pathlib - - out = [] - modules = {} - moduledocstrings = {} - for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]): - module = data[0] - mod = __import__(module, None, None, [classname]) - self.filenames.add(mod.__file__) - cls = getattr(mod, classname) - if not cls.__doc__: - print("Warning: %s does not have a docstring." % classname) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - - example_file = getattr(cls, '_example', None) - if example_file: - p = pathlib.Path(inspect.getabsfile(pygments)).parent.parent /\ - 'tests' / 'examplefiles' / example_file - content = p.read_text(encoding='utf-8') - if not content: - raise Exception( - f"Empty example file '{example_file}' for lexer " - f"{classname}") - - if data[2]: - lexer_name = data[2][0] - docstring += '\n\n .. admonition:: Example\n' - docstring += f'\n .. 
code-block:: {lexer_name}\n\n' - for line in content.splitlines(): - docstring += f' {line}\n' - - modules.setdefault(module, []).append(( - classname, - ', '.join(data[2]) or 'None', - ', '.join(data[3]).replace('*', '\\*').replace('_', '\\') or 'None', - ', '.join(data[4]) or 'None', - docstring)) - if module not in moduledocstrings: - moddoc = mod.__doc__ - if isinstance(moddoc, bytes): - moddoc = moddoc.decode('utf8') - moduledocstrings[module] = moddoc - - for module, lexers in sorted(modules.items(), key=lambda x: x[0]): - if moduledocstrings[module] is None: - raise Exception("Missing docstring for %s" % (module,)) - heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.') - out.append(MODULEDOC % (module, heading, '-'*len(heading))) - for data in lexers: - out.append(LEXERDOC % data) - - return ''.join(out) - - def document_formatters(self): - from pygments.formatters import FORMATTERS - - out = [] - for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]): - module = data[0] - mod = __import__(module, None, None, [classname]) - self.filenames.add(mod.__file__) - cls = getattr(mod, classname) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - heading = cls.__name__ - out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None', - ', '.join(data[3]).replace('*', '\\*') or 'None', - docstring)) - return ''.join(out) - - def document_filters(self): - from pygments.filters import FILTERS - - out = [] - for name, cls in FILTERS.items(): - self.filenames.add(sys.modules[cls.__module__].__file__) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - out.append(FILTERDOC % (cls.__name__, name, docstring)) - return ''.join(out) - - -def setup(app): - app.add_directive('pygmentsdoc', PygmentsDoc) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py deleted file mode 100644 index 795232cf642a3c7ca107fbeb47bb74f854ac82a4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/config.py +++ /dev/null @@ -1,149 +0,0 @@ -import os -import typing -from collections.abc import MutableMapping -from pathlib import Path - - -class undefined: - pass - - -class EnvironError(Exception): - pass - - -class Environ(MutableMapping): - def __init__(self, environ: typing.MutableMapping = os.environ): - self._environ = environ - self._has_been_read: typing.Set[typing.Any] = set() - - def __getitem__(self, key: typing.Any) -> typing.Any: - self._has_been_read.add(key) - return self._environ.__getitem__(key) - - def __setitem__(self, key: typing.Any, value: typing.Any) -> None: - if key in self._has_been_read: - raise EnvironError( - f"Attempting to set environ['{key}'], but the value has already been " - "read." - ) - self._environ.__setitem__(key, value) - - def __delitem__(self, key: typing.Any) -> None: - if key in self._has_been_read: - raise EnvironError( - f"Attempting to delete environ['{key}'], but the value has already " - "been read." 
- ) - self._environ.__delitem__(key) - - def __iter__(self) -> typing.Iterator: - return iter(self._environ) - - def __len__(self) -> int: - return len(self._environ) - - -environ = Environ() - -T = typing.TypeVar("T") - - -class Config: - def __init__( - self, - env_file: typing.Optional[typing.Union[str, Path]] = None, - environ: typing.Mapping[str, str] = environ, - env_prefix: str = "", - ) -> None: - self.environ = environ - self.env_prefix = env_prefix - self.file_values: typing.Dict[str, str] = {} - if env_file is not None and os.path.isfile(env_file): - self.file_values = self._read_file(env_file) - - @typing.overload - def __call__(self, key: str, *, default: None) -> typing.Optional[str]: - ... - - @typing.overload - def __call__(self, key: str, cast: typing.Type[T], default: T = ...) -> T: - ... - - @typing.overload - def __call__( - self, key: str, cast: typing.Type[str] = ..., default: str = ... - ) -> str: - ... - - @typing.overload - def __call__( - self, - key: str, - cast: typing.Callable[[typing.Any], T] = ..., - default: typing.Any = ..., - ) -> T: - ... - - @typing.overload - def __call__( - self, key: str, cast: typing.Type[str] = ..., default: T = ... - ) -> typing.Union[T, str]: - ... - - def __call__( - self, - key: str, - cast: typing.Optional[typing.Callable] = None, - default: typing.Any = undefined, - ) -> typing.Any: - return self.get(key, cast, default) - - def get( - self, - key: str, - cast: typing.Optional[typing.Callable] = None, - default: typing.Any = undefined, - ) -> typing.Any: - key = self.env_prefix + key - if key in self.environ: - value = self.environ[key] - return self._perform_cast(key, value, cast) - if key in self.file_values: - value = self.file_values[key] - return self._perform_cast(key, value, cast) - if default is not undefined: - return self._perform_cast(key, default, cast) - raise KeyError(f"Config '{key}' is missing, and has no default.") - - def _read_file(self, file_name: typing.Union[str, Path]) -> typing.Dict[str, str]: - file_values: typing.Dict[str, str] = {} - with open(file_name) as input_file: - for line in input_file.readlines(): - line = line.strip() - if "=" in line and not line.startswith("#"): - key, value = line.split("=", 1) - key = key.strip() - value = value.strip().strip("\"'") - file_values[key] = value - return file_values - - def _perform_cast( - self, key: str, value: typing.Any, cast: typing.Optional[typing.Callable] = None - ) -> typing.Any: - if cast is None or value is None: - return value - elif cast is bool and isinstance(value, str): - mapping = {"true": True, "1": True, "false": False, "0": False} - value = value.lower() - if value not in mapping: - raise ValueError( - f"Config '{key}' has value '{value}'. Not a valid bool." - ) - return mapping[value] - try: - return cast(value) - except (TypeError, ValueError): - raise ValueError( - f"Config '{key}' has value '{value}'. Not a valid {cast.__name__}." 
- ) diff --git a/spaces/propilot/seo-powered-by-ia/cases/content_generation.py b/spaces/propilot/seo-powered-by-ia/cases/content_generation.py deleted file mode 100644 index c3ee06ac031fe3de59e3ee4f6c861a93ca043b0e..0000000000000000000000000000000000000000 --- a/spaces/propilot/seo-powered-by-ia/cases/content_generation.py +++ /dev/null @@ -1,96 +0,0 @@ -from .monitoring import HEADERS -import streamlit as st -from langchain import LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) - - -CREATIVE_TEMPLATE = """Debes actuar como un agente experto en SEO y Marketing Digital, y utilizando tus habilidades y conocimientos deberás proporcionar ideas innovadoras para la creación de contenido basado en las necesidades del usuario.\n -Analiza la descripción proporcionada y propone varias ideas de temas, ángulos o enfoques que podrían ser interesantes y atractivos para la audiencia, manteniendo siempre un enfoque de optimización SEO. -""" - -GENERATIVE_TEMPLATE = """Debes actuar como un agente experto en SEO y Marketing Digital, y utilizando tus habilidades y conocimientos deberás generar contenido basado en las ideas y necesidades del usuario optimizado para SEO.\n -1. Generar un esquema, esbozando la estructura del contenido a generar.\n -2. Escritura del contenido: Desarrolla cada sección de tu esquema en párrafos completos.\n -3. Inclusión de emojies: Los emojis pueden hacer que tu contenido sea más atractivo y amigable.\n -4. Optimización SEO: Asegúrate de que tus palabras clave aparecen en los lugares importantes de tu contenido.\n -5. Análisis: Debes analizar el contenido generado con el fin de identificar palabras claves que puedan optimizarse para SEO, puntos de mejora en la estructura y detalle del contenido. -""" - -def handle_seo_action(template, action, action_text, model, api_key, creativity_level=None): - if api_key: - if template: - with st.spinner(f'{action_text}...'): - if creativity_level: - return action(None, model, api_key, creativity_level, template) - return action(None, model, api_key, template) - else: - st.warning('Please enter some template to generate.') - else: - st.warning('Please enter your API Key.') - return None - - -def get_prompt_template(mode, include_emojis, tone, user_request): - base_template = CREATIVE_TEMPLATE if mode == "Lluvia de Ideas" else GENERATIVE_TEMPLATE - if include_emojis: - base_template += "\n\nIncluye emojis en tu contenido para hacerlo más atractivo." - if tone == 'Formal': - base_template += "\n\nEl tono del contenido debe ser formal." 
- - if user_request: - base_template += f"\n\nSolicitud del usuario: {user_request}" - - return base_template - -def display_content_generation(api_key, model): - st.title("Generación de Contenido Digital") - - # Agregamos las opciones personalizables - st.markdown("Personaliza tu contenido:") - mode = st.radio("Modo", ['Lluvia de Ideas', 'Generador de Contenido']) - include_emojis = st.checkbox("Incluir emojis") - tone = st.selectbox("Tono del contenido", ['Formal', 'Informal', 'Amigable', 'Profesional']) - - st.markdown("Selecciona el nivel de creatividad:") - creativity_level = st.slider("Nivel de Creatividad", min_value=0.0, max_value=1.0, value=0.5, step=0.1) - - # Allow the user to modify the template and enter user request - st.markdown("Modifica las instrucciones del bot si lo deseas:") - user_request = st.text_input("Solicitud del usuario") - _ = st.text_area("Previsualización de Solicitud", get_prompt_template(mode, include_emojis, tone, user_request), height=200) - - if st.button("Generar"): - # Combine template and user request - template_with_request = get_prompt_template(mode, include_emojis, tone, user_request) - - # Pass the template to handle_seo_action function - generated_content = handle_seo_action(template_with_request, content_generation_with_langchain, 'Creando el contenido optimizado para SEO', model, api_key, creativity_level) - - if generated_content: - st.success('Generación de contenido completada.') - st.markdown("**Contenido generado:**") - st.markdown(f"> {generated_content}") - - -def content_generation_with_langchain(content, model, openai_key, creativity_level, template): - chat = ChatOpenAI( - model=model, - temperature=creativity_level, - openai_api_key=openai_key, - headers=HEADERS if HEADERS else None, - ) - - system_message_prompt = SystemMessagePromptTemplate.from_template(template) - human_template = "{content}" - human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) - chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) - - chain = LLMChain(llm=chat, prompt=chat_prompt) - optimized_content = chain.run(content=content) - - return optimized_content diff --git a/spaces/pseudolab/Balanced-News-Reading/app.py b/spaces/pseudolab/Balanced-News-Reading/app.py deleted file mode 100644 index 0eb620589a945fc72d132e3182d52282e27ae31b..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/Balanced-News-Reading/app.py +++ /dev/null @@ -1,117 +0,0 @@ -import gradio as gr -from newspaper import Article -from newspaper import Config - -from transformers import pipeline -import requests -from bs4 import BeautifulSoup -import re - -from bs4 import BeautifulSoup as bs -import requests -from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration - -# Load Model and Tokenize -def get_summary(input_text): - tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news") - summary_model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news") - input_ids = tokenizer.encode(input_text, return_tensors="pt") - summary_text_ids = summary_model.generate( - input_ids=input_ids, - length_penalty=2, - top_p=0.9, - max_length=128, - min_length=12, - num_beams=2, - ) - # "task_specific_params": { - # "summarization": { - # "length_penalty": 1.0, - # "max_length": 128, - # "min_length": 12, - # "num_beams": 4 - # } - return tokenizer.decode(summary_text_ids[0], skip_special_tokens=True) - - - -USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) 
Gecko/20100101 Firefox/78.0' -config = Config() -config.browser_user_agent = USER_AGENT -config.request_timeout = 10 - -class news_collector: - def __init__(self): - self.examples_text = [] - - - def get_new_parser(self, url): - article = Article(url, language='ko') - article.download() - article.parse() - return article - - def get_news_links(self, page=''): - url = "https://news.daum.net/breakingnews/economic" - response = requests.get(url) - html_text = response.text - - soup = bs(response.text, 'html.parser') - news_titles = soup.select("a.link_txt") - links = [item.attrs['href'] for item in news_titles ] - https_links = [item for item in links if item.startswith('https') == True] - https_links - return https_links - - - def update_news_examples(self): - news_links = self.get_news_links() - - for news_url in news_links: - article = self.get_new_parser(news_url) - if article.text: - self.examples_text.append([get_summary(article.text[:1500]), news_url]) - - -title = "균형잡힌 뉴스 읽기 (Balanced News Reading)" - - - -with gr.Blocks(theme='pseudolab/huggingface-korea-theme') as demo: - - collector = news_collector() - collector.update_news_examples() - - with gr.Tab("소개"): - gr.Markdown( - """ - # 균형잡힌 뉴스 읽기 (Balanced News Reading) - - 긍정적인 기사와 부정적인 기사인지 확인하여 뉴스를 읽을 수 있습니다. 최근 경제뉴스기사를 가져와 Example에서 바로 확인할 수 있도록 구성했습니다. - - ## 1. 사용방법 - Daum뉴스의 경제 기사를 가져와 내용을 요약하고 `Example`에 가져옵니다. 감정 분석을 하고 싶은 기사를 `Examples`에서 선택해서 `Submit`을 누르면 `Classification`에 - 해당 기사의 감정 평가 결과가 표시됩니다. 감정평가는 각 상태의 확률 정보와 함께 `neutral`, `positive`, `negative` 3가지로 표시됩니다. - - ## 2. 구조 설명 - 뉴스기사를 크롤링 및 요약 모델을 이용한 기사 요약 >> 기사 요약정보 Example에 추가 >> 한국어 fine-tunning한 감정평가 모델을 이용해 입력된 기사에 대한 감정 평가 진행 - """) - - with gr.Tab("데모"): - Link_TXT = gr.Textbox(label="뉴스 내용", placeholder = "뉴스 기사 내용을 입력하세요.") - gr.load("models/gabrielyang/finance_news_classifier-KR_v7", - # gr.load("models/Hyeonseo/ko-finance_news_classifier", - inputs = Link_TXT) - Link_URL = gr.Textbox(label="뉴스 URL") - - # diable due to dynamic loading - # update_button = gr.Button(value="뉴스 데이터 업데이트") - # update_button.click(fn=collector.update_news_examples_and_update, inputs=None, outputs=None) - - gr.Examples( - collector.examples_text, - [Link_TXT, Link_URL], - ) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/pyodide-demo/self-hosted/attrs.js b/spaces/pyodide-demo/self-hosted/attrs.js deleted file mode 100644 index 5eae914be43c85d17088d41542d12d136b7540fe..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/attrs.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof 
globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="attrs.data";var REMOTE_PACKAGE_BASE="attrs.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attr",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attrs",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","attrs-21.4.0-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:111541,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1249,2285,3610,4658,5847,7059,8387,9649,10741,11485,12460,13322,14509,15929,17397,18839,20147,21322,22729,23905,25165,26451,27850,28966,30052,30994,32111,33298,34556,35664,36511,37500,38555,39887,41325,42689,44074,45566,47020,48445,49662,50675,51790,52902,54012,54931,55915,57098,58192,59391,60378,61548,62580,63149,63742,64725,66038,67336,68421,69581,70428,71648,72920,74276,75404,76727,77930,79041,80331,81435,82588,83806,84956,86163,87443,88644,89860,91015,91936,92958,94162,95442,96898,97440,97985,98544,99352,100517,101523,102642,103543,104621,105724,107261,108760,110298,111274],sizes:[1249,1036,1325,1048,1189,1212,1328,1262,1092,744,975,862,1187,1420,1468,1442,1308,1175,1407,1176,1260,1286,1399,1116,1086,942,1117,1187,1258,1108,847,989,1055,1332,1438,1364,1385,1492,1454,1425,1217,1013,1115,1112,1110,919,984,1183,1094,1199,987,1170,1032,569,593,983,1313,1298,1085,1160,847,1220,1272,1356,1128,1323,1203,1111,1290,1104,1153,1218,1150,1207,1280,1201,1216,1155,921,1022,1204,1280,1456,542,545,559,808,1165,1006,1119,901,1078,1103,1537,1499,1538,976,267],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_attrs.data")}Module["addRunDependency"]("datafile_attrs.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/attr/__init__.py",start:0,end:1667,audio:0},{filename:"/lib/python3.9/site-packages/attr/_cmp.py",start:1667,end:5832,audio:0},{filename:"/lib/python3.9/site-packages/attr/_compat.py",start:5832,end:14228,audio:0},{filename:"/lib/python3.9/site-packages/attr/_config.py",start:14228,end:15120,audio:0},{filename:"/lib/python3.9/site-packages/attr/_funcs.py",start:15120,end:29873,audio:0},{filename:"/lib/python3.9/site-packages/attr/_make.py",start:29873,end:132609,audio:0},{filename:"/lib/python3.9/site-packages/attr/_next_gen.py",start:132609,end:138361,audio:0},{filename:"/lib/python3.9/site-packages/attr/_version_info.py",start:138361,end:140555,audio:0},{filename:"/lib/python3.9/site-packages/attr/converters.py",start:140555,end:144633,audio:0},{filename:"/lib/python3.9/site-packages/attr/exceptions.py",start:144633,end:146614,audio:0},{filename:"/lib/python3.9/site-packages/attr/filters.py",start:146614,end:147738,audio:0},{filename:"/lib/python3.9/site-packages/attr/setters.py",start:147738,end:149204,audio:0},{filename:"/lib/python3.9/site-packages/attr/validators.py",start:149204,end:165170,audio:0},{filename:"/lib/python3.9/site-packages/attr/__init__.pyi",start:165170,end:180270,audio:0},{filename:"/lib/python3.9/site-packages/attr/_cmp.pyi",start:180270,end:180587,audio:0},{filename:"/lib/python3.9/site-packages/attr/_version_info.pyi",start:180587,end:180796,audio:0},{filename:"/lib/python3.9/site-packages/attr/converters.pyi",start:180796,end:181212,audio:0},{filename:"/lib/python3.9/site-packages/attr/exceptions.pyi",start:181212,end:181751,audio:0},{filename:"/lib/python3.9/site-packages/attr/filters.pyi",start:181751,end:181966,audio:0},{filename:"/lib/python3.9/site-packages/attr/py.typed",start:181966,end:181966,audio:0},{filename:"/lib/python3.9/site-packages/attr/setters.pyi",start:181966,end:182539,audio:0},{filename:"/lib/python3.9/site-packages/attr/validators.pyi",start:182539,end:184807,audio:0},{filename:"/lib/python3.9/site-packages/attrs/__init__.py",start:184807,end:185916,audio:0},{filename:"/lib/python3.9/site-packages/attrs/converters.py",start:185916,end:185986,audio:0},{filename:"/lib/python3.9/site-packages/attrs/exceptions.py",start:185986,end:186056,audio:0},{filename:"/lib/python3.9/site-packages/attrs/filters.py",start:186056,end:186123,audio:0},{filename:"/lib/python3.9/site-packages/attrs/setters.py",start:186123,end:186190,audio:0},{filename:"/lib/python3.9/site-packages/attrs/validators.py",start:186190,end:186260,audio:0},{filename:"/lib/python3.9/site-packages/attrs/__init__.pyi",start:186260,end:188242,audio:0},{filename:"/lib/python3.9/site-packages/attrs/py.typed",start:188242,end:188242,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/PKG-INFO",start:188242,end:196284,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/SOURCES.txt",start:196284,end:198562,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/dependency
_links.txt",start:198562,end:198563,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/not-zip-safe",start:198563,end:198564,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/requires.txt",start:198564,end:199195,audio:0},{filename:"/lib/python3.9/site-packages/attrs-21.4.0-py3.9.egg-info/top_level.txt",start:199195,end:199206,audio:0}],remote_package_size:115637,package_uuid:"f3fa16f6-57e0-45c5-8792-b9addacd1d36"})})(); \ No newline at end of file diff --git a/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py b/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/qwerrsc/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, 
hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/r3gm/AICoverGen/src/rmvpe.py b/spaces/r3gm/AICoverGen/src/rmvpe.py deleted file mode 100644 index 8d0d57297d4301e43a4fdcda216ae39c5e3b83b4..0000000000000000000000000000000000000000 --- a/spaces/r3gm/AICoverGen/src/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import torch, numpy as np -import torch.nn as nn -import torch.nn.functional as F - - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in 
range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 
2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 
8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # frame length#index - salience = np.pad(salience, ((0, 0), (4, 4))) # frame length,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # frame length,9 - todo_cents_mapping = np.array(todo_cents_mapping) # frame length,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # frame length - devided = product_sum / weight_sum # frame length - # t3 = ttime() - maxx = np.max(salience, axis=1) # frame length - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("Quotations~1.wav") ### edit -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py b/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import 
functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x 
- - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md b/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md deleted file mode 100644 index ea8edda4b318b3bb95cc140dc500ca09ccda44d0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/CandyDoll Elizabeta S Set 010jpg.md +++ /dev/null @@ -1,99 +0,0 @@ - -

                CandyDoll Elizabeta S Set 010jpg: A Review of the Photo Collection

                -

                If you are a fan of young European models, you may have heard of CandyDoll, a Japanese publisher and brand that produces photo books and videos of them. One of their popular models is Elizabeta S, a blonde beauty with blue eyes and a sweet smile. In this article, we will review one of her photo collections, CandyDoll Elizabeta S Set 010jpg, and see if it is worth buying.

                -

                CandyDoll Elizabeta S Set 010jpg


                Download File ===== https://tinourl.com/2uL1sE



                -

                Introduction

                -

                CandyDoll is a website that offers monthly subscriptions to access exclusive content of young European models. The website claims that all models are over 18 years old and have consented to participate in the photo shoots. The website also states that all content is legal and complies with Japanese laws.

                -

                Elizabeta S is one of the models featured on CandyDoll. She is from Russia and has been modeling since she was 14 years old. She has appeared in several photo books and videos for CandyDoll, as well as other brands such as Silver Stars and Fashion Land. She is known for her cute and innocent look, as well as her slender figure and long legs.

                -

                CandyDoll Elizabeta S Set 010jpg is one of her photo collections that was released in November 2021. It contains 100 photos of Elizabeta S in various outfits and poses. The theme of the collection is leotard and swimsuit, which showcase her curves and charm. The style of the collection is playful and cheerful, with bright colors and fun props.

                -

                The main features and benefits of buying this photo collection are:

                -
                  -
                • You can enjoy high-quality photos of Elizabeta S in different scenarios.
                • -
                • You can admire her beauty and personality in every photo.
                • -
                • You can support her modeling career and help her grow as an artist.
                • -
                • You can add this collection to your personal library or share it with your friends.
                • -
                -

                Content and Quality

                -

                The photo collection consists of 100 photos in JPG format. The size of each photo is about 3 MB, which means they have high resolution and clarity. The dimensions of each photo are about 1800 x 1200 pixels, which means they can fit most screens and devices.

                -

                CandyDoll Elizabeta S leotard
                -CandyDoll Elizabeta S swimsuit
                -CandyDoll Elizabeta S dress
                -CandyDoll Elizabeta S model
                -CandyDoll Elizabeta S young
                -CandyDoll Elizabeta S teen
                -CandyDoll Elizabeta S girl
                -CandyDoll Elizabeta S pretty
                -CandyDoll Elizabeta S whip
                -CandyDoll Elizabeta S photo collection
                -CandyDoll Elizabeta S high-quality images
                -CandyDoll Elizabeta S Japanese company
                -CandyDoll Elizabeta S leotardmodel
                -CandyDoll Elizabeta S swimsuitmodel
                -CandyDoll Elizabeta S youngmodel
                -CandyDoll Elizabeta S girlmodel
                -CandyDoll Elizabeta S teenmodel
                -CandyDoll Elizabeta S prettymodel
                -CandyDoll Elizabeta S candydolltv
                -CandyDoll Elizabeta S teenage
                -CandyDoll Elizabeta S petite
                -CandyDoll Elizabeta S tween
                -Historidecade candydoll elizabeta s
                -Twitter candydoll elizabeta s
                -Soundcloud candydoll elizabeta s
                -Feipoicircgreas1985 candydoll elizabeta s
                -Retweets candydoll elizabeta s
                -Likes candydoll elizabeta s
                -Bookmarks candydoll elizabeta s
                -Karl3.2.3 candydoll elizabeta s
                -Hakan_marmaris candydoll elizabeta s
                -Guillermo Trébol candydoll elizabeta s
                -FigasLeonardo candydoll elizabeta s
                -Süper candydoll elizabeta s
                -Cute candydoll elizabeta s
                -Nov 2 2021 candydoll elizabeta s
                -Jan 5 2022 candydoll elizabeta s
                -Jul 7 2022 candydoll elizabeta s
                -Jul 18 2022 candydoll elizabeta s
                -Nov 6 2022 candydoll elizabeta s

                -

                The lighting, color, contrast, and sharpness of the photos are excellent. The photos are well-lit and have vibrant colors that match the mood of each scene. The contrast between Elizabeta S's skin tone and her outfits is striking and appealing. The sharpness of the photos is crisp and detailed, which allows you to see every feature of her face and body.

                -

                The posing, expression, and outfit of Elizabeta S in the photos are stunning. She poses confidently and naturally in front of the camera, showing off her flexibility and grace. She smiles warmly and innocently in most photos, but also makes some seductive and teasing expressions in others. She wears various leotards and swimsuits that accentuate her curves and charm. Some examples are:

                - - - -
                Elizabeta S wearing a pink leotard with white polka dotsElizabeta S wearing a blue swimsuit with yellow stripes
                Elizabeta S wearing a purple leotard with white flowersElizabeta S wearing a green swimsuit with white stars
                -

                The background, setting, and props of the photos are creative and fun. The photos were taken in various locations such as a studio, a park, a beach, a pool, etc. The backgrounds are colorful and lively, creating a contrast with Elizabeta S's outfits. The settings are realistic and appropriate for each scene. The props are cute and amusing, such as balloons, bubbles, flowers, etc.

                -

                Price and Availability

                -

                The price of the photo collection is $29.99 USD (or equivalent currency). You can purchase it online through CandyDoll's website using PayPal or credit card. You will receive an email with a download link after completing your payment. You can download the photo collection as a ZIP file to your computer or device.

                -

                There is currently no discount or bundle offer for this photo collection. However, you can save money by subscribing to CandyDoll's monthly plan for $49.99 USD (or equivalent currency). This will give you access to all content on their website, including new releases every month.

                -

                There is also no refund or exchange policy for this photo collection. Once you purchase it, you cannot cancel or return it. You also cannot share or resell it to others without violating CandyDoll's terms of service.

                -

                Pros and Cons

                -

                Before you decide whether to buy this photo collection or not, you should weigh its pros and cons carefully. Here are some points to consider:

                -

                Pros

                -
                  -
                • The photo collection has high-quality photos that capture Elizabeta S's beauty and personality.
                • -
                • The photo collection has diverse content that showcases Elizabeta S's versatility as a model.
                • -
                • The photo collection has a reasonable price that reflects its value as a product.
                • -
                • The photo collection has an easy purchase process that ensures your convenience as a customer.
                • -
                -

                Cons

                -
                  -
                • The photo collection may not be suitable for everyone's taste or preference.
                • -
                • The photo collection may not be legal or ethical in some countries or regions.
                • -
                • The photo collection may not be safe or secure to download or store on your computer or device.
                • -
                • The photo collection may not have any customer service or support if you encounter any problems or issues.
                • -
                -

                Conclusion

                -

                In conclusion, CandyDoll Elizabeta S Set 010jpg is a photo collection that features Elizabeta S in various leotards and swimsuits. It has high-quality photos that display her beauty and personality in different scenarios. It has diverse content that shows her versatility as a model. It has a reasonable price that reflects its value as a product. It has an easy purchase process that ensures your convenience as a customer. However, it also has some drawbacks that you should be aware of before buying it. It may not be suitable for everyone's taste or preference. It may not be legal or ethical in some countries or regions. It may not be safe or secure to download or store on your computer encounter any problems or issues. Therefore, you should carefully consider the pros and cons of this photo collection before making your final decision. If you are a fan of Elizabeta S and CandyDoll, you may find this photo collection worth buying. If you are not, you may want to look for other options that suit your needs and expectations better. We hope this review has been helpful and informative for you. If you have any questions or opinions about this photo collection, please feel free to leave a comment below. We would love to hear from you and see what you think.

                FAQs

                -

                Q: How old is Elizabeta S?

                -

                A: According to CandyDoll's website, Elizabeta S is 18 years old as of 2021. However, some sources claim that she is older or younger than that. We cannot verify her exact age or birthday.

                -

                Q: Where can I find more photos or videos of Elizabeta S?

                -

                A: You can find more photos or videos of Elizabeta S on CandyDoll's website, as well as other websites such as Silver Stars and Fashion Land. You can also follow her on social media platforms such as Twitter and Instagram.

                -

                Q: Is CandyDoll legal and ethical?

                -

                A: CandyDoll claims that all its content is legal and ethical, and that all models are over 18 years old and have consented to participate in the photo shoots. However, some countries or regions may have different laws or standards regarding the production and distribution of such content. Therefore, you should check your local regulations before buying or viewing any CandyDoll products.

                -

                Q: How can I contact CandyDoll or Elizabeta S?

                -

                A: You can contact CandyDoll through their email address: info@candydoll.tv. You can also contact Elizabeta S through her social media accounts or her fan club.

                -

                Q: What are some other similar products or brands to CandyDoll?

                -

                A: Some other similar products or brands to CandyDoll are Silver Stars, Fashion Land, New Star, Teen Models Club, etc. They also feature young European models in various outfits and poses.

                -

                0a6ba089eb
                -
                -
                \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md b/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md deleted file mode 100644 index 7708a8ba58df4fd37867e44baa100954e6f5946c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen Unlock the Ultimate Features of the Professional Photo Software.md +++ /dev/null @@ -1,116 +0,0 @@ - -

                Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen: A Comprehensive Review

                -

                If you are looking for a powerful and versatile photo editing and graphic design software, you might have heard of Corel PaintShop Pro 2020 Ultimate. This software is one of the best alternatives to Adobe Photoshop, offering a wide range of features and tools to help you create stunning images and designs. But what is Corel PaintShop Pro 2020 Ultimate exactly, and how can you get it for free with a keygen? In this article, we will answer these questions and more, giving you a comprehensive review of this amazing software.

                -

                Corel PaintShop Pro 2020 Ultimate 22.2.0.8 Keygen


                Download Zip >>> https://tinourl.com/2uL43M



                -

                Introduction

                -

                What is Corel PaintShop Pro 2020 Ultimate?

                -

                Corel PaintShop Pro 2020 Ultimate is the latest version of Corel's flagship photo editing and graphic design software. It is designed for both beginners and professionals, offering a user-friendly interface and a rich set of features to suit any creative project. Whether you want to enhance your photos, create stunning graphics, or design logos, flyers, posters, or web pages, Corel PaintShop Pro 2020 Ultimate can help you achieve your goals.

                -

                What is a keygen and why do you need it?

                -

                A keygen is a software that generates serial numbers or activation codes for other software. You need a keygen to activate Corel PaintShop Pro 2020 Ultimate because it is a paid software that requires a valid license to use. However, with a keygen, you can bypass the license verification process and use the software for free without any limitations or restrictions.

                -

                What are the benefits of using Corel PaintShop Pro 2020 Ultimate?

                -

                There are many benefits of using Corel PaintShop Pro 2020 Ultimate, such as:

                -
                  -
                • You can edit and enhance your photos with advanced tools such as AI-powered noise removal, HDR effects, lens correction, color grading, and more.
                • -
                • You can create stunning graphics with tools such as text and typography, vector drawing and editing, brushes and gradients, and more.
                • -
                • You can access bonus software and content that are included in the Ultimate edition, such as GRFX Studio, Parallels Toolbox, PhotoMirage Express, Painter Essentials 6, and AfterShot 3.
                • -
                • You can save money by using a keygen to activate the software for free instead of paying for a subscription or a one-time purchase.
                • -
                -

                Features of Corel PaintShop Pro 2020 Ultimate

                -

                Photo editing tools

                -

                Corel PaintShop Pro 2020 Ultimate offers a comprehensive set of photo editing tools that allow you to edit your photos in any way you want. Some of the photo editing tools are:

                -

                AI-powered tools

                -

                Corel PaintShop Pro 2020 Ultimate uses artificial intelligence to help you improve your photos with ease. For example, you can use the AI Upsampling tool to enlarge your photos without losing quality, the AI Denoise tool to remove noise from your photos without blurring details, the AI Artifact Removal tool to remove compression artifacts from JPEG images without affecting colors or sharpness, and the AI Style Transfer tool to apply artistic styles to your photos with one click.

                -

                Creative presets and filters

                -

                Corel PaintShop Pro 2020 Ultimate also provides you with creative presets and filters that let you add various effects to your photos with one click. For example, you can use the Instant Effects palette to apply different categories of effects such as artistic, film styles, landscape, portrait, retro, traditional, and more. You can also use the Pic-to-Painting presets to transform your photos into paintings with different styles such as impressionist, modernist, illustration, watercolor, colored pencil, pastel sketch, oil painting, and more.

                -

                Layer and mask support

                -

                Corel PaintShop Pro 2020 Ultimate also supports layers and masks that enable you to work with multiple images or elements on separate layers. You can use layers to combine images or elements in different ways such as blending modes, opacity levels, grouping, and more. You can also use masks to hide or reveal parts of an image or element on a layer. You can use different types of masks such as adjustment masks, clipping masks, gradient masks, and more.

                -

                Graphic design tools

                -

                Besides photo editing tools, Corel PaintShop Pro 2020 Ultimate also offers graphic design tools that allow you to create stunning graphics for various purposes. Some of the graphic design tools are:

                -

    

                -

                Text and typography tools

                -

                Corel PaintShop Pro 2020 Ultimate allows you to add text to your images or designs with various options such as fonts, sizes, styles, colors, alignments, spacing, and more. You can also use typography tools such as kerning, leading, tracking, and more. You can also create text effects such as drop shadows, glows, outlines, and more.

                -

                Vector drawing and editing tools

                -

                Corel PaintShop Pro 2020 Ultimate also lets you draw and edit vector graphics with precision and ease. You can use vector drawing tools such as pen, shape, curve, and more. You can also use vector editing tools such as node editing, transforming, aligning, distributing, and more. You can also convert raster images into vector graphics with the Smart Carver tool.

                -

                Brushes and gradients tools

                -

                Corel PaintShop Pro 2020 Ultimate also gives you access to brushes and gradients tools that allow you to add color and texture to your images or designs. You can use brushes tools such as paintbrush, airbrush, eraser, clone brush, and more. You can also use gradients tools such as linear gradient, radial gradient, conical gradient, and more. You can also create custom brushes and gradients with your own images or colors.

                -

                Bonus software and content

    -

    Corel PaintShop Pro 2020 Ultimate also comes with bonus software and content that add extra value to the package. Some of the bonus software and content are:
    

                -

                GRFX Studio

                -

    GRFX Studio is a powerful photo editing program that allows you to apply thousands of photo effects and filters to your images with one click. You can also use GRFX Studio to adjust your photos with tools such as crop, rotate, resize, sharpen, and more. You can also use GRFX Studio to create collages, frames, borders, and stickers with your photos.
    

                -

                Parallels Toolbox

                -

    Parallels Toolbox is a handy utility that provides you with over 30 tools to optimize your PC's performance and productivity. You can use Parallels Toolbox to clean your disk, free up memory, uninstall apps, download videos, record screen, take screenshots, and more.
    

                -

                PhotoMirage Express

                -

    PhotoMirage Express is a fun program that allows you to animate your photos with ease. You can use PhotoMirage Express to create photo animations by adding motion arrows and anchor points to your photos. You can also use PhotoMirage Express to adjust the speed, direction, and smoothness of your animations. You can also use PhotoMirage Express to export and share your animations as GIFs, videos, or social media posts.
    

                -

                Painter Essentials 6

                -

    Painter Essentials 6 is a beginner-friendly program that allows you to create digital paintings from your photos or a blank canvas. You can use Painter Essentials 6 to choose from various painting styles such as oil, watercolor, impressionist, sketch, and more. You can also use Painter Essentials 6 to customize your brushes, colors, textures, and more. You can also use Painter Essentials 6 to learn from tutorials and tips from experts.
    

                -

                AfterShot 3

                -

    AfterShot 3 is a fast and easy program that allows you to edit and organize your RAW photos. You can use AfterShot 3 to batch process your RAW photos with tools such as crop, rotate, straighten, white balance, exposure, contrast, and more. You can also use AfterShot 3 to apply presets and filters to your RAW photos with one click. You can also use AfterShot 3 to manage your photo library with tools such as ratings, tags, collections, and more.
    

                -

                How to use Corel PaintShop Pro 2020 Ultimate keygen?

                -

                If you want to use Corel PaintShop Pro 2020 Ultimate for free with a keygen, you need to follow these simple steps:

                -

                Download and install the software

                -

    The first step is to download and install the software from the official website or another trusted source. You can choose between the trial version and the full version. The trial version will expire after 30 days, while the full version will require a license key to activate.
    

                -

                Run the keygen and generate a serial number

                -

                The second step is to run the keygen that you can download from this link or any other reliable source. The keygen is a small program that will generate a serial number for you. You need to copy the serial number and paste it into the software when prompted.

                -

                Activate the software with the serial number and enjoy!

                -

                The final step is to activate the software with the serial number that you got from the keygen. You need to enter the serial number into the software and click on activate. The software will verify the serial number and unlock all the features and tools for you. You can now enjoy using Corel PaintShop Pro 2020 Ultimate for free!

                -

                Conclusion

                -

                Summary of the main points

                -

    In conclusion, Corel PaintShop Pro 2020 Ultimate is a powerful and versatile photo editing and graphic design suite that offers a wide range of features and tools to help you create stunning images and designs. It is one of the best alternatives to Adobe Photoshop, offering a user-friendly interface and a rich set of features to suit any creative project. You can also get it for free with a keygen that will generate a serial number for you.
    

                -

                Call to action

                -

                If you are interested in trying out Corel PaintShop Pro 2020 Ultimate for yourself, you can download it from this link or any other trusted source. You can also download the keygen from this link or any other reliable source. Follow the steps above to install and activate the software with the keygen and enjoy using it for free!

                -

                If you liked this article, please share it with your friends and leave a comment below. Thank you for reading!

                -

                Frequently Asked Questions

                -
                  -
                1. Is Corel PaintShop Pro 2020 Ultimate safe to use?
                2. -

                  Yes, Corel PaintShop Pro 2020 Ultimate is safe to use as long as you download it from the official website or any other trusted source. However, you should be careful when downloading and using a keygen, as some keygens may contain viruses or malware that can harm your PC. You should always scan any file that you download with an antivirus program before opening it.

                  -
                3. Is Corel PaintShop Pro 2020 Ultimate compatible with Windows 10?
                4. -

                  Yes, Corel PaintShop Pro 2020 Ultimate is compatible with Windows 10 as well as Windows 8.1 and Windows 7 (64-bit editions only). It requires at least 4 GB of RAM, 1 GB of hard disk space, and an Intel or AMD processor with 64-bit support.

                  -
                5. Can I use Corel PaintShop Pro 2020 Ultimate on Mac?
                6. -

                  No, Corel PaintShop Pro 2020 Ultimate is not available for Mac users. However, you can use Parallels Desktop or Boot Camp to run Windows on your Mac and then install Corel PaintShop Pro 2020 Ultimate on it.

                  -
                7. Can I use Corel PaintShop Pro 2020 Ultimate offline?
                8. -

                  Yes, you can use Corel PaintShop Pro 2020 Ultimate offline once you have activated it with a serial number from a keygen. However, you may need an internet connection for some features such as online help, updates, or bonus software downloads.

                  -
                9. Can I update Corel PaintShop Pro 2020 Ultimate after using a keygen?
                10. -

                  No, you should not update Corel PaintShop Pro 2020 Ultimate after using a keygen because it may invalidate your serial number and deactivate your software. You should only update your software if you have purchased a valid license from Corel or an authorized reseller.

                  -
                -

                0a6ba089eb
                -
                -
                \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py b/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py deleted file mode 100644 index 676718fff3c6a7120cea91b0cfc95f8872929da7..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/example_inference.py +++ /dev/null @@ -1,79 +0,0 @@ -''' Example file to test tts_infer after installing it. Refer to section 1.1 in README.md for steps of installation. ''' - -from tts_infer.tts import TextToMel, MelToWav -from tts_infer.transliterate import XlitEngine -from tts_infer.num_to_word_on_sent import normalize_nums - -import re -import numpy as np -from scipy.io.wavfile import write - -from mosestokenizer import * -from indicnlp.tokenize import sentence_tokenize - -INDIC = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"] - -def split_sentences(paragraph, language): - if language == "en": - with MosesSentenceSplitter(language) as splitter: - return splitter([paragraph]) - elif language in INDIC: - return sentence_tokenize.sentence_split(paragraph, lang=language) - - -device='cpu' -text_to_mel = TextToMel(glow_model_dir='/path/to/glow_ckp', device=device) -mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi_ckp', device=device) - -lang='hi' # transliteration from En to Hi -engine = XlitEngine(lang) # loading translit model globally - -def translit(text, lang): - reg = re.compile(r'[a-zA-Z]') - words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()] - updated_sent = ' '.join(words) - return updated_sent - -def run_tts(text, lang): - text = text.replace('।', '.') # only for hindi models - text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang - text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang - final_text = ' ' + text_num_to_word_and_transliterated - - mel = text_to_mel.generate_mel(final_text) - audio, sr = mel_to_wav.generate_wav(mel) - write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed - return (sr, audio) - -def run_tts_paragraph(text, lang): - audio_list = [] - split_sentences_list = split_sentences(text, language='hi') - - for sent in split_sentences_list: - sr, audio = run_tts(sent, lang) - audio_list.append(audio) - - concatenated_audio = np.concatenate([i for i in audio_list]) - write(filename='temp_long.wav', rate=sr, data=concatenated_audio) - return (sr, concatenated_audio) - -if __name__ == "__main__": - _, audio = run_tts('mera naam neeraj hai', 'hi') - - para = ''' - भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है। - इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी, - पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है। - भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है। - भारत मेरा देश है और मुझे भारतीय होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है। - इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी, - पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है। - भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है। - भारत मेरा देश है और मुझे भारतीय 
होने पर गर्व है। ये विश्व का सातवाँ सबसे बड़ा और विश्व में दूसरा सबसे अधिक जनसंख्या वाला देश है। - इसे भारत, हिन्दुस्तान और आर्यव्रत के नाम से भी जाना जाता है। ये एक प्रायद्वीप है जो पूरब में बंगाल की खाड़ी, - पश्चिम में अरेबियन सागर और दक्षिण में भारतीय महासागर जैसे तीन महासगरों से घिरा हुआ है। - भारत का राष्ट्रीय पशु चीता, राष्ट्रीय पक्षी मोर, राष्ट्रीय फूल कमल, और राष्ट्रीय फल आम है। - ''' - - print('Num chars in paragraph: ', len(para)) - _, audio_long = run_tts_paragraph(para, 'hi') diff --git a/spaces/rajeshradhakrishnan/english-malayalam/static/script.js b/spaces/rajeshradhakrishnan/english-malayalam/static/script.js deleted file mode 100644 index 9eb3b0c72b77af95f505be70503c8e42794f8470..0000000000000000000000000000000000000000 --- a/spaces/rajeshradhakrishnan/english-malayalam/static/script.js +++ /dev/null @@ -1,90 +0,0 @@ - -const translateText = async (text) => { - console.log(text) - const inferResponse = await fetch(`infer_t5?input=${text}`); - const inferJson = await inferResponse.json(); - console.log(inferJson.output) - return inferJson.output; - }; - - -function generatePrompterAssistantText(inputString) { - // Split the input string into an array of sentences - const sentences = inputString.split('<|endoftext|>'); - - // Initialize arrays for prompter and assistant text - let prompterText = []; - let assistantText = []; - - // Loop through each sentence and add it to the prompter or assistant text array - for (let i = 0; i < sentences.length; i++) { - // Check if the sentence contains the tag - if (sentences[i].includes('<|prompter|>')) { - // Extract the text within the tags and add it to the prompter text array - const prompterSentence = sentences[i].replace(/<\|prompter\|>/g, ''); - prompterText.push(prompterSentence); - } else if (sentences[i].includes('<|assistant|>')) { - const assistantSentence = sentences[i].replace(/<\|assistant\|>/g, ''); - // Add the sentence to the assistant text array - assistantText.push(assistantSentence); - } - } - - // Return the prompter and assistant text arrays - return [prompterText, assistantText]; - } - -const submitButton = document.querySelector('#submit') -const outPutElement = document.querySelector('#output') -const inputElement = document.querySelector('input') -const historyElement = document.querySelector('.history') -const buttonElement = document.querySelector('button') - - -function changeInput(value) -{ - console.log(value) - const inputElement = document.querySelector('input') - inputElement.value = value -} -async function getMessage(){ - //console.log("input value "+ inputElement.value) - const options = { - method: "POST", - headers: { - Authorization: `Bearer ${API_KEY}`, - "Content-Type": "application/json" - }, - body: JSON.stringify({ - inputs: "<|prompter|>" + inputElement.value + "<|endoftext|><|assistant|>", - parameters: {"max_new_tokens": 200, "temperature": 0.9} - }) - } - try{ - const response = await fetch("https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", options); - const data = await response.json() - //console.log(data[0].generated_text) - - if(inputElement.value && data && data[0].generated_text){ - const [prompterText, assistantText] = generatePrompterAssistantText(data[0].generated_text); - // const en_text_ml = "English: " + assistantText[0] + " Malayalam:"; - // console.log(en_text_ml) - //console.log(prompterText) - //console.log(assistantText) - outPutElement.textContent = await translateText(assistantText[0]); - const pElement = 
document.createElement('p') - pElement.textContent = inputElement.value - pElement.addEventListener('click', () => changeInput(pElement.textContent)) - historyElement.append(pElement) - } - } catch(error) { - console.log(error) - } -} - -submitButton.addEventListener('click', getMessage) - -function clearInput(){ - inputElement.value = '' -} -buttonElement.addEventListener('click', clearInput) \ No newline at end of file diff --git a/spaces/rajistics/h2o_wave_transformers/Dockerfile b/spaces/rajistics/h2o_wave_transformers/Dockerfile deleted file mode 100644 index d9a2be1c94a67457831a1ca78ca9ba43e5b8289a..0000000000000000000000000000000000000000 --- a/spaces/rajistics/h2o_wave_transformers/Dockerfile +++ /dev/null @@ -1,31 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN apt update && apt install -y ffmpeg -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN useradd -m -u 1000 user - -USER user - -ENV HOME=/home/user -ENV PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . $HOME/app - -ENV H2O_WAVE_LISTEN=":7860" -ENV H2O_WAVE_ADDRESS='http://127.0.0.1:7860' -ENV H2O_WAVE_DATA_DIR='/home/user/app/data' - -RUN mkdir -p $HOME/app/data - - -CMD ["wave", "run", "app", "--no-reload"] \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts deleted file mode 100644 index e1185478461f4b15444b7b2ae114c8a6819a992a..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/querystring.d.ts +++ /dev/null @@ -1,131 +0,0 @@ -/** - * The `querystring` module provides utilities for parsing and formatting URL - * query strings. It can be accessed using: - * - * ```js - * const querystring = require('querystring'); - * ``` - * - * `querystring` is more performant than `URLSearchParams` but is not a - * standardized API. Use `URLSearchParams` when performance is not critical - * or when compatibility with browser code is desirable. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/querystring.js) - */ -declare module 'querystring' { - interface StringifyOptions { - encodeURIComponent?: ((str: string) => string) | undefined; - } - interface ParseOptions { - maxKeys?: number | undefined; - decodeURIComponent?: ((str: string) => string) | undefined; - } - interface ParsedUrlQuery extends NodeJS.Dict {} - interface ParsedUrlQueryInput extends NodeJS.Dict | ReadonlyArray | ReadonlyArray | null> {} - /** - * The `querystring.stringify()` method produces a URL query string from a - * given `obj` by iterating through the object's "own properties". 
- * - * It serializes the following types of values passed in `obj`:[string](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) | - * [number](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) | - * [bigint](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) | - * [boolean](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) | - * [string\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) | - * [number\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) | - * [bigint\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) | - * [boolean\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) The numeric values must be finite. Any other input values will be coerced to - * empty strings. - * - * ```js - * querystring.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' }); - * // Returns 'foo=bar&baz=qux&baz=quux&corge=' - * - * querystring.stringify({ foo: 'bar', baz: 'qux' }, ';', ':'); - * // Returns 'foo:bar;baz:qux' - * ``` - * - * By default, characters requiring percent-encoding within the query string will - * be encoded as UTF-8\. If an alternative encoding is required, then an alternative`encodeURIComponent` option will need to be specified: - * - * ```js - * // Assuming gbkEncodeURIComponent function already exists, - * - * querystring.stringify({ w: '中文', foo: 'bar' }, null, null, - * { encodeURIComponent: gbkEncodeURIComponent }); - * ``` - * @since v0.1.25 - * @param obj The object to serialize into a URL query string - * @param [sep='&'] The substring used to delimit key and value pairs in the query string. - * @param [eq='='] . The substring used to delimit keys and values in the query string. - */ - function stringify(obj?: ParsedUrlQueryInput, sep?: string, eq?: string, options?: StringifyOptions): string; - /** - * The `querystring.parse()` method parses a URL query string (`str`) into a - * collection of key and value pairs. - * - * For example, the query string `'foo=bar&abc=xyz&abc=123'` is parsed into: - * - * ```js - * { - * foo: 'bar', - * abc: ['xyz', '123'] - * } - * ``` - * - * The object returned by the `querystring.parse()` method _does not_prototypically inherit from the JavaScript `Object`. This means that typical`Object` methods such as `obj.toString()`, - * `obj.hasOwnProperty()`, and others - * are not defined and _will not work_. - * - * By default, percent-encoded characters within the query string will be assumed - * to use UTF-8 encoding. If an alternative character encoding is used, then an - * alternative `decodeURIComponent` option will need to be specified: - * - * ```js - * // Assuming gbkDecodeURIComponent function already exists... - * - * querystring.parse('w=%D6%D0%CE%C4&foo=bar', null, null, - * { decodeURIComponent: gbkDecodeURIComponent }); - * ``` - * @since v0.1.25 - * @param str The URL query string to parse - * @param [sep='&'] The substring used to delimit key and value pairs in the query string. - * @param [eq='='] . The substring used to delimit keys and values in the query string. - */ - function parse(str: string, sep?: string, eq?: string, options?: ParseOptions): ParsedUrlQuery; - /** - * The querystring.encode() function is an alias for querystring.stringify(). 
- */ - const encode: typeof stringify; - /** - * The querystring.decode() function is an alias for querystring.parse(). - */ - const decode: typeof parse; - /** - * The `querystring.escape()` method performs URL percent-encoding on the given`str` in a manner that is optimized for the specific requirements of URL - * query strings. - * - * The `querystring.escape()` method is used by `querystring.stringify()` and is - * generally not expected to be used directly. It is exported primarily to allow - * application code to provide a replacement percent-encoding implementation if - * necessary by assigning `querystring.escape` to an alternative function. - * @since v0.1.25 - */ - function escape(str: string): string; - /** - * The `querystring.unescape()` method performs decoding of URL percent-encoded - * characters on the given `str`. - * - * The `querystring.unescape()` method is used by `querystring.parse()` and is - * generally not expected to be used directly. It is exported primarily to allow - * application code to provide a replacement decoding implementation if - * necessary by assigning `querystring.unescape` to an alternative function. - * - * By default, the `querystring.unescape()` method will attempt to use the - * JavaScript built-in `decodeURIComponent()` method to decode. If that fails, - * a safer equivalent that does not throw on malformed URLs will be used. - * @since v0.1.25 - */ - function unescape(str: string): string; -} -declare module 'node:querystring' { - export * from 'querystring'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md deleted file mode 100644 index 0d08fc7e4a2dc67d472df5c0b06b018cccf52225..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/French Christmas Celebration Part 2 ((INSTALL)).md +++ /dev/null @@ -1,10 +0,0 @@ -
                -

    Christmas is a very French festival, so you will find that there are lots of things you can do in the run-up to Christmas Day. There are many opportunities to celebrate in France during the holiday season. Let's look at some of the main ones.
    

                -

    Can't find a Christmas market in Paris? No worries! French Christmas markets are also held in Strasbourg and Besançon. Not only do you get to visit a holiday market, but you also get to enjoy Christmas foods.
    

                -

                french christmas celebration part 2


                Download >>>>> https://urlgoal.com/2uCJZV



                -

    Christmas is a wonderful holiday. The food is wonderful, the sights and activities are great, and the French Christmas is so much fun. If you haven't celebrated a traditional Christmas yet, the French Christmas is a great place to start. Let's take a look at the essentials for planning a traditional French Christmas.
    

                -

    To start, I think it is important to mention that not all people who celebrate Christmas here are practicing Christians. There are many different ways to celebrate Christmas in France, and I encourage you to try a few of them and find out what works best for you. So, what do you do to celebrate the French Christmas?
    

                -

    As I mentioned above, not all people who celebrate Christmas here are practicing Christians. If you're open to other ways of celebrating, you'll find there are many non-religious options available in France. I recommend the following for a non-religious celebration of Christmas in France.
    

                -

    

                899543212b
                -
                -
                \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md deleted file mode 100644 index 2f060e881c72b514a59cf52e7f829da9d95801e8..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphitech Cimagraphi V8 13 MULTILINGUALLz0.md +++ /dev/null @@ -1,8 +0,0 @@ -

                Graphitech Cimagraphi V8 13 MULTILINGUALLz0


                DOWNLOAD ★★★★★ https://urlgoal.com/2uCJtr



                -
    -Graphitech Cimagraphi V8 13 Multilingual Lz0 DOWNLOAD: 372a6038bc Related Links: David E Simon An Embedded Software Primer Pdf Free Download. 8a78ff9644
    
                -
                -
                -

                diff --git a/spaces/rehanuddin/03StreamlitVideoASRNLP/app.py b/spaces/rehanuddin/03StreamlitVideoASRNLP/app.py deleted file mode 100644 index e0f03cf2557eba112bf95ebf5eb582da8d8a0fe3..0000000000000000000000000000000000000000 --- a/spaces/rehanuddin/03StreamlitVideoASRNLP/app.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import deque -import streamlit as st -import torch -from streamlit_player import st_player -from transformers import AutoModelForCTC, Wav2Vec2Processor -from streaming import ffmpeg_stream - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -player_options = { - "events": ["onProgress"], - "progress_interval": 200, - "volume": 1.0, - "playing": True, - "loop": False, - "controls": False, - "muted": False, - "config": {"youtube": {"playerVars": {"start": 1}}}, -} - -# disable rapid fading in and out on `st.code` updates -st.markdown("", unsafe_allow_html=True) - -@st.cache(hash_funcs={torch.nn.parameter.Parameter: lambda _: None}) -def load_model(model_path="facebook/wav2vec2-large-robust-ft-swbd-300h"): - processor = Wav2Vec2Processor.from_pretrained(model_path) - model = AutoModelForCTC.from_pretrained(model_path).to(device) - return processor, model - -processor, model = load_model() - -def stream_text(url, chunk_duration_ms, pad_duration_ms): - sampling_rate = processor.feature_extractor.sampling_rate - - # calculate the length of logits to cut from the sides of the output to account for input padding - output_pad_len = model._get_feat_extract_output_lengths(int(sampling_rate * pad_duration_ms / 1000)) - - # define the audio chunk generator - stream = ffmpeg_stream(url, sampling_rate, chunk_duration_ms=chunk_duration_ms, pad_duration_ms=pad_duration_ms) - - leftover_text = "" - for i, chunk in enumerate(stream): - input_values = processor(chunk, sampling_rate=sampling_rate, return_tensors="pt").input_values - - with torch.no_grad(): - logits = model(input_values.to(device)).logits[0] - if i > 0: - logits = logits[output_pad_len : len(logits) - output_pad_len] - else: # don't count padding at the start of the clip - logits = logits[: len(logits) - output_pad_len] - - predicted_ids = torch.argmax(logits, dim=-1).cpu().tolist() - if processor.decode(predicted_ids).strip(): - leftover_ids = processor.tokenizer.encode(leftover_text) - # concat the last word (or its part) from the last frame with the current text - text = processor.decode(leftover_ids + predicted_ids) - # don't return the last word in case it's just partially recognized - text, leftover_text = text.rsplit(" ", 1) - yield text - else: - yield leftover_text - leftover_text = "" - yield leftover_text - -def main(): - state = st.session_state - st.header("Video ASR Streamlit from Youtube Link") - - with st.form(key="inputs_form"): - - # Our worlds best teachers on subjects of AI, Cognitive, Neuroscience for our Behavioral and Medical Health - ytJoschaBach="https://youtu.be/cC1HszE5Hcw?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=8984" - ytSamHarris="https://www.youtube.com/watch?v=4dC_nRYIDZU&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=2" - ytJohnAbramson="https://www.youtube.com/watch?v=arrokG3wCdE&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=3" - ytElonMusk="https://www.youtube.com/watch?v=DxREm3s1scA&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=4" - ytJeffreyShainline="https://www.youtube.com/watch?v=EwueqdgIvq4&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=5" - 
ytJeffHawkins="https://www.youtube.com/watch?v=Z1KwkpTUbkg&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=6" - ytSamHarris="https://youtu.be/Ui38ZzTymDY?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytTimelapseAI="https://www.youtube.com/watch?v=63yr9dlI0cU&list=PLHgX2IExbFovQybyfltywXnqZi5YvaSS-" - state.youtube_url = st.text_input("YouTube URL", ytTimelapseAI) - - - state.chunk_duration_ms = st.slider("Audio chunk duration (ms)", 2000, 10000, 3000, 100) - state.pad_duration_ms = st.slider("Padding duration (ms)", 100, 5000, 1000, 100) - submit_button = st.form_submit_button(label="Submit") - - if submit_button or "asr_stream" not in state: - # a hack to update the video player on value changes - state.youtube_url = ( - state.youtube_url.split("&hash=")[0] - + f"&hash={state.chunk_duration_ms}-{state.pad_duration_ms}" - ) - state.asr_stream = stream_text( - state.youtube_url, state.chunk_duration_ms, state.pad_duration_ms - ) - state.chunks_taken = 0 - - - state.lines = deque([], maxlen=100) # limit to the last n lines of subs - - - player = st_player(state.youtube_url, **player_options, key="youtube_player") - - if "asr_stream" in state and player.data and player.data["played"] < 1.0: - # check how many seconds were played, and if more than processed - write the next text chunk - processed_seconds = state.chunks_taken * (state.chunk_duration_ms / 1000) - if processed_seconds < player.data["playedSeconds"]: - text = next(state.asr_stream) - state.lines.append(text) - state.chunks_taken += 1 - if "lines" in state: - # print the lines of subs - st.code("\n".join(state.lines)) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/removebg/removebg/app.py b/spaces/removebg/removebg/app.py deleted file mode 100644 index 43e92226f80fc27e54b06ace74d319b50912d1ba..0000000000000000000000000000000000000000 --- a/spaces/removebg/removebg/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -from rembg import remove, new_session - -def remove_background(input_image, model): - session = new_session(model) - output = remove(input_image, session=session) - return output - -iface = gr.Interface( - fn=remove_background, - inputs=["image", gr.inputs.Radio(["u2net", "u2netp", "u2net_human_seg", "u2net_cloth_seg", "silueta", "isnet-general-use", "isnet-anime"], label="Model")], - outputs="image", - title="Image Background Remover", - description="This is a gradio wrapper for the rembg library. It can be used to remove the background from an image. You can choose from different models to get the best results." 
-) - -if __name__ == '__main__': - iface.launch() diff --git a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/client/_app/immutable/pages/index.svelte-ce916c65.js b/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/client/_app/immutable/pages/index.svelte-ce916c65.js deleted file mode 100644 index 92594e5c42292f8c6c826f7d5be0624412aaaa6c..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/client/_app/immutable/pages/index.svelte-ce916c65.js +++ /dev/null @@ -1 +0,0 @@ -import{S as se,i as ne,s as ie,e as C,k as A,c as T,a as z,m as D,d as I,b as s,K as Oe,g as Z,J as p,t as V,h as F,L as U,E as K,M as Ne,N as Y,O as X,P as me,Q as ge,j as Le,f as $e,R as oe,T as Se,U as Ue,V as ct,W as Be,w as ue,x as pe,y as de,q as he,o as fe,B as ve,v as ut}from"../chunks/index-bcf2726a.js";import{w as ae,b as at}from"../chunks/paths-d3bcbd10.js";const Pe=[{color:[219,14,154],label:"building"},{color:[147,142,123],label:"pervious surface"},{color:[248,12,0],label:"impervious surface"},{color:[169,113,1],label:"bare soil"},{color:[21,83,174],label:"water"},{color:[25,74,38],label:"coniferous"},{color:[70,228,131],label:"deciduous"},{color:[243,166,13],label:"brushwood"},{color:[102,0,130],label:"vineyard"},{color:[85,255,0],label:"herbaceous vegetation"},{color:[255,243,13],label:"agricultural land"},{color:[228,223,124],label:"plowed land"},{color:[61,230,235],label:"swimming pool"},{color:[255,255,255],label:"snow"},{color:[138,179,160],label:"clear cut"},{color:[107,113,79],label:"mixed"}],Re=["/samples/default.jpg","/samples/example0.png","/samples/example1.png","/samples/example2.png","/samples/example3.png","/samples/example4.png","/samples/example5.png","/samples/example6.jpg"],ke=[["","None"],["Watercolors","Watercolors"],["Colorful lego bricks","Lego brick"],["Black and white paper pencil drawing","Pencil"],["Oil on canvas painting","Painting"]];function st(){return BigInt(0xb7dd73e137d20800&((1<<63)-1)*Math.random())}const _e=ae(new Map),je=ae(),Ae=ae(),De=ae(),xe=ae(),Ie=ae({prompt:"Aerial view of rue des Lilas, Toulouse, Haute-Garonne, France",modifier:ke[0][0],seed:st(),steps:20}),be=ae(!1),Me=ae(!1);function Ye(l,e,t){const r=l.slice();return r[3]=e[t],r[5]=t,r}function Je(l){let e,t,r,a,o,n,d,i,k,E,b,P;return{c(){e=C("div"),t=C("input"),n=A(),d=C("label"),i=C("img"),P=A(),this.h()},l(g){e=T(g,"DIV",{class:!0});var v=z(e);t=T(v,"INPUT",{type:!0,name:!0,id:!0,class:!0}),n=D(v),d=T(v,"LABEL",{for:!0,class:!0});var x=z(d);i=T(x,"IMG",{src:!0,alt:!0,class:!0}),x.forEach(I),P=D(v),v.forEach(I),this.h()},h(){s(t,"type","radio"),s(t,"name","samples"),s(t,"id",r="sample-"+l[5]),t.value=a=l[5],t.disabled=o=l[0]===!0,s(t,"class","svelte-1gwcbp"),Oe(i.src,k=at+l[3])||s(i,"src",k),s(i,"alt",E=l[3]),s(i,"class","svelte-1gwcbp"),s(d,"for",b="sample-"+l[5]),s(d,"class","svelte-1gwcbp"),s(e,"class","snap-always snap-start")},m(g,v){Z(g,e,v),p(e,t),p(e,n),p(e,d),p(d,i),p(e,P)},p(g,v){v&1&&o!==(o=g[0]===!0)&&(t.disabled=o)},d(g){g&&I(e)}}}function pt(l){let e,t,r,a,o,n,d,i,k=Re,E=[];for(let b=0;b{const r=new Image;r.onload=()=>{URL.revokeObjectURL(r.src),e(r)},r.onerror=a=>{t(a)},r.src=URL.createObjectURL(l)})}function ht(l,e,t){let r,a;return Y(l,De,n=>t(2,r=n)),Y(l,be,n=>t(0,a=n)),[a,async n=>{n.preventDefault();const d=Re[parseInt(n.target.value)];if(d){const i=await fetch(at+d).then(E=>E.blob()),k=await dt(i);X(De,r=k,r)}}]}class ft extends se{constructor(e){super(),ne(this,e,ht,pt,ie,{})}}function We(l,e,t){const r=l.slice();return 
r[2]=e[t],r[7]=t,r}function Xe(l){let e,t,r,a,o,n,d,i,k,E,b,P,g=l[2].label+"",v,x,u;return{c(){e=C("div"),t=C("input"),n=A(),d=C("label"),i=me("svg"),k=me("rect"),b=A(),P=C("span"),v=V(g),u=A(),this.h()},l(f){e=T(f,"DIV",{class:!0});var h=z(e);t=T(h,"INPUT",{name:!0,type:!0,id:!0,class:!0}),n=D(h),d=T(h,"LABEL",{for:!0,class:!0});var c=z(d);i=ge(c,"svg",{width:!0,height:!0,viewBox:!0,class:!0});var m=z(i);k=ge(m,"rect",{x:!0,y:!0,width:!0,height:!0,fill:!0}),z(k).forEach(I),m.forEach(I),b=D(c),P=T(c,"SPAN",{class:!0});var w=z(P);v=F(w,g),w.forEach(I),c.forEach(I),u=D(h),h.forEach(I),this.h()},h(){s(t,"name","color"),t.checked=r=l[7]==nt,s(t,"type","radio"),s(t,"id",a="color-"+l[7]),t.value=o=l[7],s(t,"class","svelte-1oy4poo"),s(k,"x","0"),s(k,"y","0"),s(k,"width","20"),s(k,"height","20"),s(k,"fill",E="rgb("+l[2].color.join(",")+")"),s(i,"width","20"),s(i,"height","20"),s(i,"viewBox","0 0 20 20"),s(i,"class","svelte-1oy4poo"),s(P,"class","svelte-1oy4poo"),s(d,"for",x="color-"+l[7]),s(d,"class","svelte-1oy4poo"),s(e,"class","snap-always snap-start")},m(f,h){Z(f,e,h),p(e,t),p(e,n),p(e,d),p(d,i),p(i,k),p(d,b),p(d,P),p(P,v),p(e,u)},p:K,d(f){f&&I(e)}}}function vt(l){let e,t,r,a,o,n,d,i,k,E,b,P,g,v=l[0].size+"",x,u,f,h=Pe,c=[];for(let m=0;mt(0,r=k));const{color:a,label:o}=Pe[nt];let n=`rgb(${a.join(",")})`,d=40;return X(xe,r={color:n,size:d,label:o},r),[r,async k=>{const E=k.target;if(E.name==="color"){const b=parseInt(E.value),{color:P,label:g}=Pe[b];n=`rgb(${P.join(",")})`,X(xe,r={color:n,size:d,label:g},r)}else E.name==="brush"&&(d=parseInt(E.value),X(xe,r={color:n,size:d,label:o},r))},a]}class gt extends se{constructor(e){super(),ne(this,e,mt,vt,ie,{})}}function Ze(l,e,t){const r=l.slice();return r[15]=e[t],r}function Ke(l){let e,t=l[15][1]+"",r,a,o;return{c(){e=C("option"),r=V(t),o=V("`"),this.h()},l(n){e=T(n,"OPTION",{});var d=z(e);r=F(d,t),d.forEach(I),o=F(n,"`"),this.h()},h(){e.__value=a=l[15][0],e.value=e.__value},m(n,d){Z(n,e,d),p(e,r),Z(n,o,d)},p:K,d(n){n&&I(e),n&&I(o)}}}function bt(l){let e,t,r,a,o,n,d,i,k,E,b,P,g,v,x,u,f,h,c,m,w,y,M,_,S,N,R,q,$,J,te,W,O,L,re,ee,le,ce,ye,Q=ke,G=[];for(let j=0;jt(5,r=h)),Y(l,be,h=>t(6,a=h));function o(){const h=n.elements;X(Ie,r={prompt:h.prompt.value,modifier:h.modifier.value,seed:BigInt(h.seed.value),steps:parseInt(h.steps.value)},r)}let n,d=r.seed,i=r.steps,k=r.prompt,E=r.modifier;function b(){k=this.value,t(3,k)}function P(){E=this.value,t(4,E)}const g=h=>{const c=h.currentTarget.selectedIndex-1;t(4,E=ke[c][0]),X(Ie,r.modifier=ke[c][0],r)};function v(){d=this.value,t(1,d)}const x=()=>{t(1,d=st()),o()};function u(){i=ct(this.value),t(2,i)}function f(h){Be[h?"unshift":"push"](()=>{n=h,t(0,n)})}return[n,d,i,k,E,r,a,o,b,P,g,v,x,u,f]}class wt extends se{constructor(e){super(),ne(this,e,yt,bt,ie,{})}}let _t=(l=21)=>crypto.getRandomValues(new Uint8Array(l)).reduce((e,t)=>(t&=63,t<36?e+=t.toString(36):t<62?e+=(t-26).toString(36).toUpperCase():t>62?e+="-":e+="_",e),"");var xt=typeof globalThis!="undefined"?globalThis:typeof window!="undefined"?window:typeof global!="undefined"?global:typeof self!="undefined"?self:{};function kt(l){return l&&l.__esModule&&Object.prototype.hasOwnProperty.call(l,"default")?l.default:l}var it={exports:{}};(function(l,e){(function(t,r){l.exports=r()})(typeof self!="undefined"?self:xt,function(){return function(t){var r={};function a(o){if(r[o])return r[o].exports;var n=r[o]={i:o,l:!1,exports:{}};return t[o].call(n.exports,n,n.exports,a),n.l=!0,n.exports}return 
a.m=t,a.c=r,a.d=function(o,n,d){a.o(o,n)||Object.defineProperty(o,n,{enumerable:!0,get:d})},a.r=function(o){typeof Symbol!="undefined"&&Symbol.toStringTag&&Object.defineProperty(o,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(o,"__esModule",{value:!0})},a.t=function(o,n){if(1&n&&(o=a(o)),8&n||4&n&&typeof o=="object"&&o&&o.__esModule)return o;var d=Object.create(null);if(a.r(d),Object.defineProperty(d,"default",{enumerable:!0,value:o}),2&n&&typeof o!="string")for(var i in o)a.d(d,i,function(k){return o[k]}.bind(null,i));return d},a.n=function(o){var n=o&&o.__esModule?function(){return o.default}:function(){return o};return a.d(n,"a",n),n},a.o=function(o,n){return Object.prototype.hasOwnProperty.call(o,n)},a.p="",a(a.s=0)}([function(t,r,a){function o(g,v){return function(x){if(Array.isArray(x))return x}(g)||function(x,u){if(Symbol.iterator in Object(x)||Object.prototype.toString.call(x)==="[object Arguments]"){var f=[],h=!0,c=!1,m=void 0;try{for(var w,y=x[Symbol.iterator]();!(h=(w=y.next()).done)&&(f.push(w.value),!u||f.length!==u);h=!0);}catch(M){c=!0,m=M}finally{try{h||y.return==null||y.return()}finally{if(c)throw m}}return f}}(g,v)||function(){throw new TypeError("Invalid attempt to destructure non-iterable instance")}()}function n(g){return function(v){if(Array.isArray(v)){for(var x=0,u=new Array(v.length);x255?255:w,g:y=y>255?255:y,b:M=M>255?255:M}}}},{key:"make",value:function(u){var f=u.size,h=u.color;try{f*=window.devicePixelRatio;var c=this.parseColor(h),m=JSON.stringify(c);if(this.canvases[m]=this.canvases[m]||{},this.canvases[m][f]!=null)return this.canvases[m][f];var w=document.createElement("canvas");f+=f%2,w.width=f,w.height=f;for(var y=w.getContext("2d"),M=y.createImageData(f,f),_=0;_y||_>M)&&(_+=2*++y+1)}while(y<0)}},{key:"fillCircle",value:function(u,f){for(var h=4*u.width,c=1;c{"classNames"in a&&t(0,r=a.classNames)},[r]}class St extends se{constructor(e){super(),ne(this,e,It,Et,ie,{classNames:0})}}function Pt(l){var f;let e,t,r,a,o,n,d,i=((f=l[0])==null?void 0:f.label)+"",k,E,b,P,g,v,x,u;return P=new St({}),{c(){e=C("div"),t=C("div"),r=C("canvas"),a=A(),o=C("canvas"),n=A(),d=C("span"),k=V(i),E=A(),b=C("button"),ue(P.$$.fragment),this.h()},l(h){e=T(h,"DIV",{});var c=z(e);t=T(c,"DIV",{class:!0});var m=z(t);r=T(m,"CANVAS",{class:!0,width:!0,height:!0}),z(r).forEach(I),a=D(m),o=T(m,"CANVAS",{class:!0,width:!0,height:!0}),z(o).forEach(I),n=D(m),d=T(m,"SPAN",{class:!0});var w=z(d);k=F(w,i),w.forEach(I),E=D(m),b=T(m,"BUTTON",{class:!0});var y=z(b);pe(P.$$.fragment,y),y.forEach(I),m.forEach(I),c.forEach(I),this.h()},h(){s(r,"class","canvas svelte-vhujxn"),s(r,"width","512"),s(r,"height","512"),s(o,"class","brush svelte-vhujxn"),s(o,"width","10"),s(o,"height","10"),s(d,"class","label svelte-vhujxn"),s(b,"class","absolute bottom-0 left-0 p-3"),b.disabled=g=l[3].size<=0,s(t,"class","relative overflow-clip")},m(h,c){Z(h,e,c),p(e,t),p(t,r),l[11](r),p(t,a),p(t,o),l[12](o),p(t,n),p(t,d),p(d,k),p(t,E),p(t,b),de(P,b,null),v=!0,x||(u=[U(r,"touchmove",Ct),U(r,"pointerenter",Mt),U(r,"pointerup",l[4]),U(r,"pointerleave",l[4]),U(r,"pointercancel",l[4]),U(r,"pointerout",l[4]),U(r,"pointermove",l[6]),U(r,"pointerdown",l[5]),U(b,"click",Se(l[13]))],x=!0)},p(h,[c]){var m;(!v||c&1)&&i!==(i=((m=h[0])==null?void 0:m.label)+"")&&Le(k,i),(!v||c&8&&g!==(g=h[3].size<=0))&&(b.disabled=g)},i(h){v||(he(P.$$.fragment,h),v=!0)},o(h){fe(P.$$.fragment,h),v=!1},d(h){h&&I(e),l[11](null),l[12](null),ve(P),x=!1,Ue(u)}}}function Mt(){}function et(l,e){const 
t=l.getBoundingClientRect();return{x:(e.clientX-t.left)*(l.width/t.width),y:(e.clientY-t.top)*(l.height/t.height)}}function tt(l){l.fillStyle="#46e483",l.fillRect(0,0,l.canvas.width,l.canvas.height)}function ze(l,e){l.drawImage(e,0,0,l.canvas.width,l.canvas.height)}const Ct=l=>l.preventDefault();function Tt(l,e,t){let r,a,o,n;Y(l,_e,_=>t(3,r=_)),Y(l,De,_=>t(10,a=_)),Y(l,xe,_=>t(0,o=_)),Y(l,Ae,_=>t(18,n=_));let d,i,k,E,b={x:0,y:0},P;ut(()=>{t(9,E=d.getContext("2d")),t(8,k=i.getContext("2d")),window.devicePixelRatio=1,P=new Qe(d),t(1,d.style.height="unset",d),t(1,d.style.width="unset",d),X(Ae,n=d,n),tt(E)});let g=!1,v;function x(){t(2,i.style.top=`${10+o.size/2}px`,i),t(2,i.style.left=`${10+o.size/2}px`,i),g=!1}function u(_){g=!0,b=et(d,_),P.draw({from:b,to:b,size:o.size,color:o.color}),v=_t(),_e.update(S=>(S.set(v,{brush:o,points:[{from:b,to:b}]}),S))}function f(_){const S=et(d,_);t(2,i.style.top=`${_.offsetY}px`,i),t(2,i.style.left=`${_.offsetX}px`,i),g&&(P.draw({from:b,to:S,size:o.size,color:o.color}),_e.update(N=>{const R=N.get(v);return R==null||R.points.push({from:b,to:S}),N}),b=S)}function h(_){const{size:S,color:N}=_;t(2,i.width=S,i),t(2,i.height=S,i),t(8,k.fillStyle=N,k),k.arc(S/2,S/2,S/2,0,2*Math.PI),k.fill()}function c(){if(r.size<=0)return;const _=Array.from(r.keys());_e.update(S=>(S.delete(_[_.length-1]),S)),m(E)}function m(_){const S=document.createElement("canvas");S.width=512,S.height=512,window.devicePixelRatio=1;const N=new Qe(S);tt(_),a&&ze(_,a),Array.from(r.values()).forEach(R=>{R.points.forEach((q,$)=>{N.draw({from:q.from,to:q.to,size:R.brush.size,color:R.brush.color})})}),requestAnimationFrame(()=>{ze(_,S)})}function w(_){Be[_?"unshift":"push"](()=>{d=_,t(1,d)})}function y(_){Be[_?"unshift":"push"](()=>{i=_,t(2,i),t(8,k),t(0,o)})}const M=()=>c();return l.$$.update=()=>{l.$$.dirty&257&&k&&o&&(h(o),t(2,i.style.top=`${10+o.size/2}px`,i),t(2,i.style.left=`${10+o.size/2}px`,i)),l.$$.dirty&1536&&a&&(ze(E,a),X(_e,r=new Map,r))},[o,d,i,r,x,u,f,c,k,E,a,w,y,M]}class zt extends se{constructor(e){super(),ne(this,e,Tt,Pt,ie,{})}}function rt(l){let e,t,r;return{c(){e=C("img"),this.h()},l(a){e=T(a,"IMG",{class:!0,alt:!0,src:!0,width:!0,height:!0}),this.h()},h(){s(e,"class",t="image "+(l[1]?"opacity-30":"")+" svelte-1t0h0rs"),s(e,"alt","Generative Map Result"),Oe(e.src,r=l[0])||s(e,"src",r),s(e,"width","512"),s(e,"height","512")},m(a,o){Z(a,e,o)},p(a,o){o&2&&t!==(t="image "+(a[1]?"opacity-30":"")+" svelte-1t0h0rs")&&s(e,"class",t),o&1&&!Oe(e.src,r=a[0])&&s(e,"src",r)},d(a){a&&I(e)}}}function lt(l){let e,t,r,a,o,n;return{c(){e=C("div"),t=me("svg"),r=me("path"),a=A(),o=C("span"),n=V(ot),this.h()},l(d){e=T(d,"DIV",{class:!0});var i=z(e);t=ge(i,"svg",{xmlns:!0,fill:!0,viewBox:!0,class:!0});var k=z(t);r=ge(k,"path",{fill:!0,d:!0}),z(r).forEach(I),k.forEach(I),a=D(i),o=T(i,"SPAN",{class:!0});var E=z(o);n=F(E,ot),E.forEach(I),i.forEach(I),this.h()},h(){s(r,"fill","currentColor"),s(r,"d","M20 12a8 8 0 0 1-8 8v4a12 12 0 0 0 12-12h-4Zm-2-5.3a8 8 0 0 1 2 5.3h4c0-3-1.1-5.8-3-8l-3 2.7Z"),s(t,"xmlns","http://www.w3.org/2000/svg"),s(t,"fill","none"),s(t,"viewBox","0 0 24 24"),s(t,"class","animate-spin max-w-[3rem]"),s(o,"class","text-xs"),s(e,"class","loading svelte-1t0h0rs")},m(d,i){Z(d,e,i),p(e,t),p(t,r),p(e,a),p(e,o),p(o,n)},p:K,d(d){d&&I(e)}}}function Ot(l){let e,t,r=l[0]&&rt(l),a=l[1]&<();return{c(){e=C("div"),r&&r.c(),t=A(),a&&a.c(),this.h()},l(o){e=T(o,"DIV",{class:!0});var n=z(e);r&&r.l(n),t=D(n),a&&a.l(n),n.forEach(I),this.h()},h(){s(e,"class","relative overflow-clip flex flex-col 
justify-center items-center w-full h-full")},m(o,n){Z(o,e,n),r&&r.m(e,null),p(e,t),a&&a.m(e,null)},p(o,[n]){o[0]?r?r.p(o,n):(r=rt(o),r.c(),r.m(e,t)):r&&(r.d(1),r=null),o[1]?a?a.p(o,n):(a=lt(),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},i:K,o:K,d(o){o&&I(e),r&&r.d(),a&&a.d()}}}let ot="";async function Bt(l){return new Promise((e,t)=>{try{const r=document.createElement("a");r.download=`sucess-${Date.now()}.png`,r.target="_self",r.onclick=async a=>{r.href&&URL.revokeObjectURL(r.href),r.href=l},requestAnimationFrame(()=>{console.log("Downloading image."),r.click(),e(null)})}catch{t()}})}async function Rt(l,{prompt:e,modifier:t,steps:r,seed:a}){const o=await fetch("/predict",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({data:[l,e+". "+t,r,a.toString()]})});if(!o.ok)throw new Error("Prediction request failed.");return await o.text()}function jt(l,e,t){let r,a,o,n,d;return Y(l,Me,i=>t(2,r=i)),Y(l,je,i=>t(0,a=i)),Y(l,be,i=>t(1,o=i)),Y(l,Ie,i=>t(3,n=i)),Y(l,Ae,i=>t(4,d=i)),l.$$.update=()=>{l.$$.dirty&26&&(async()=>{if(o){const i=await Rt(d.toDataURL(),n);X(je,a=i,a),X(be,o=!1,o)}})(),l.$$.dirty&5&&(async()=>r&&(await Bt(a),X(Me,r=!1,r)))()},[a,o,r,n,d]}class At extends se{constructor(e){super(),ne(this,e,jt,Ot,ie,{})}}function Dt(l){let e,t,r,a,o,n,d,i,k,E,b,P,g,v,x,u,f,h,c,m,w,y,M,_,S,N,R,q,$,J,te,W;return P=new gt({}),x=new zt({}),f=new At({}),R=new ft({}),$=new wt({}),{c(){e=C("div"),t=C("article"),r=C("h1"),a=V("Drawing to Map"),o=A(),n=C("p"),d=V("This space is for the ControlNet model "),i=C("a"),k=C("span"),E=V("Drawing2Map"),b=A(),ue(P.$$.fragment),g=A(),v=C("div"),ue(x.$$.fragment),u=A(),ue(f.$$.fragment),h=A(),c=C("button"),m=V("Generate Map"),y=A(),M=C("button"),_=V("Save Result"),N=A(),ue(R.$$.fragment),q=A(),ue($.$$.fragment),this.h()},l(O){e=T(O,"DIV",{class:!0});var L=z(e);t=T(L,"ARTICLE",{class:!0});var re=z(t);r=T(re,"H1",{});var ee=z(r);a=F(ee,"Drawing to Map"),ee.forEach(I),o=D(re),n=T(re,"P",{});var le=z(n);d=F(le,"This space is for the ControlNet model "),i=T(le,"A",{href:!0,target:!0});var ce=z(i);k=T(ce,"SPAN",{});var ye=z(k);E=F(ye,"Drawing2Map"),ye.forEach(I),ce.forEach(I),le.forEach(I),re.forEach(I),b=D(L),pe(P.$$.fragment,L),g=D(L),v=T(L,"DIV",{class:!0});var Q=z(v);pe(x.$$.fragment,Q),u=D(Q),pe(f.$$.fragment,Q),Q.forEach(I),h=D(L),c=T(L,"BUTTON",{class:!0});var G=z(c);m=F(G,"Generate Map"),G.forEach(I),y=D(L),M=T(L,"BUTTON",{class:!0});var j=z(M);_=F(j,"Save Result"),j.forEach(I),N=D(L),pe(R.$$.fragment,L),q=D(L),pe($.$$.fragment,L),L.forEach(I),this.h()},h(){s(i,"href","https://github.com/RubenGres/Drawing2Map"),s(i,"target","_blank"),s(t,"class","prose"),s(v,"class","drawings py-3 -mx-3 svelte-1sy339h"),c.disabled=w=l[0]===!0,s(c,"class","green svelte-1sy339h"),M.disabled=S=l[1]===!0||!l[2],s(M,"class","svelte-1sy339h"),s(e,"class","max-w-screen-md mx-auto px-3 py-5 relative 
z-0")},m(O,L){Z(O,e,L),p(e,t),p(t,r),p(r,a),p(t,o),p(t,n),p(n,d),p(n,i),p(i,k),p(k,E),p(e,b),de(P,e,null),p(e,g),p(e,v),de(x,v,null),p(v,u),de(f,v,null),p(e,h),p(e,c),p(c,m),p(e,y),p(e,M),p(M,_),p(e,N),de(R,e,null),p(e,q),de($,e,null),J=!0,te||(W=[U(c,"click",Se(l[3])),U(M,"click",Se(l[4]))],te=!0)},p(O,[L]){(!J||L&1&&w!==(w=O[0]===!0))&&(c.disabled=w),(!J||L&6&&S!==(S=O[1]===!0||!O[2]))&&(M.disabled=S)},i(O){J||(he(P.$$.fragment,O),he(x.$$.fragment,O),he(f.$$.fragment,O),he(R.$$.fragment,O),he($.$$.fragment,O),J=!0)},o(O){fe(P.$$.fragment,O),fe(x.$$.fragment,O),fe(f.$$.fragment,O),fe(R.$$.fragment,O),fe($.$$.fragment,O),J=!1},d(O){O&&I(e),ve(P),ve(x),ve(f),ve(R),ve($),te=!1,Ue(W)}}}function Nt(l,e,t){let r,a,o;return Y(l,be,i=>t(0,r=i)),Y(l,Me,i=>t(1,a=i)),Y(l,je,i=>t(2,o=i)),[r,a,o,()=>X(be,r=!0,r),()=>X(Me,a=!0,a)]}class Vt extends se{constructor(e){super(),ne(this,e,Nt,Dt,ie,{})}}export{Vt as default}; diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolo_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolo_head.py deleted file mode 100644 index b446cb7eb24b6608ba217713a36c917dc4b93407..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolo_head.py +++ /dev/null @@ -1,621 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import warnings - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, multiclass_nms) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class YOLOV3Head(BaseDenseHead, BBoxTestMixin): - """YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): Number of input channels per scale. - out_channels (List[int]): The number of output channels per scale - before the final 1x1 layer. Default: (1024, 512, 256). - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - featmap_strides (List[int]): The stride of each scale. - Should be in descending order. Default: (32, 16, 8). - one_hot_smoother (float): Set a non-zero value to enable label-smooth - Default: 0. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - loss_cls (dict): Config of classification loss. - loss_conf (dict): Config of confidence loss. - loss_xy (dict): Config of xy coordinate loss. - loss_wh (dict): Config of wh coordinate loss. - train_cfg (dict): Training config of YOLOV3 head. Default: None. - test_cfg (dict): Testing config of YOLOV3 head. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels, - out_channels=(1024, 512, 256), - anchor_generator=dict( - type='YOLOAnchorGenerator', - base_sizes=[[(116, 90), (156, 198), (373, 326)], - [(30, 61), (62, 45), (59, 119)], - [(10, 13), (16, 30), (33, 23)]], - strides=[32, 16, 8]), - bbox_coder=dict(type='YOLOBBoxCoder'), - featmap_strides=[32, 16, 8], - one_hot_smoother=0., - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_conf=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_xy=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_wh=dict(type='MSELoss', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Normal', std=0.01, - override=dict(name='convs_pred'))): - super(YOLOV3Head, self).__init__(init_cfg) - # Check params - assert (len(in_channels) == len(out_channels) == len(featmap_strides)) - - self.num_classes = num_classes - self.in_channels = in_channels - self.out_channels = out_channels - self.featmap_strides = featmap_strides - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.one_hot_smoother = one_hot_smoother - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - - self.prior_generator = build_prior_generator(anchor_generator) - - self.loss_cls = build_loss(loss_cls) - self.loss_conf = build_loss(loss_conf) - self.loss_xy = build_loss(loss_xy) - self.loss_wh = build_loss(loss_wh) - - self.num_base_priors = self.prior_generator.num_base_priors[0] - assert len( - self.prior_generator.num_base_priors) == len(featmap_strides) - self._init_layers() - - @property - def anchor_generator(self): - - warnings.warn('DeprecationWarning: `anchor_generator` is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - @property - def num_anchors(self): - """ - Returns: - int: Number of anchors on each point of feature map. 
- """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - @property - def num_levels(self): - return len(self.featmap_strides) - - @property - def num_attrib(self): - """int: number of attributes in pred_map, bboxes (4) + - objectness (1) + num_classes""" - - return 5 + self.num_classes - - def _init_layers(self): - self.convs_bridge = nn.ModuleList() - self.convs_pred = nn.ModuleList() - for i in range(self.num_levels): - conv_bridge = ConvModule( - self.in_channels[i], - self.out_channels[i], - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - conv_pred = nn.Conv2d(self.out_channels[i], - self.num_base_priors * self.num_attrib, 1) - - self.convs_bridge.append(conv_bridge) - self.convs_pred.append(conv_pred) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - for conv_pred, stride in zip(self.convs_pred, self.featmap_strides): - bias = conv_pred.bias.reshape(self.num_base_priors, -1) - # init objectness with prior of 8 objects per feature map - # refer to https://github.com/ultralytics/yolov3 - nn.init.constant_(bias.data[:, 4], - bias_init_with_prob(8 / (608 / stride)**2)) - nn.init.constant_(bias.data[:, 5:], bias_init_with_prob(0.01)) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple[Tensor]: A tuple of multi-level predication map, each is a - 4D-tensor of shape (batch_size, 5+num_classes, height, width). - """ - - assert len(feats) == self.num_levels - pred_maps = [] - for i in range(self.num_levels): - x = feats[i] - x = self.convs_bridge[i](x) - pred_map = self.convs_pred[i](x) - pred_maps.append(pred_map) - - return tuple(pred_maps), - - @force_fp32(apply_to=('pred_maps', )) - def get_bboxes(self, - pred_maps, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. It has - been accelerated since PR #5991. - - Args: - pred_maps (list[Tensor]): Raw predictions for a batch of images. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. Default: None. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(pred_maps) == self.num_levels - cfg = self.test_cfg if cfg is None else cfg - scale_factors = np.array( - [img_meta['scale_factor'] for img_meta in img_metas]) - - num_imgs = len(img_metas) - featmap_sizes = [pred_map.shape[-2:] for pred_map in pred_maps] - - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=pred_maps[0].device) - flatten_preds = [] - flatten_strides = [] - for pred, stride in zip(pred_maps, self.featmap_strides): - pred = pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.num_attrib) - pred[..., :2].sigmoid_() - flatten_preds.append(pred) - flatten_strides.append( - pred.new_tensor(stride).expand(pred.size(1))) - - flatten_preds = torch.cat(flatten_preds, dim=1) - flatten_bbox_preds = flatten_preds[..., :4] - flatten_objectness = flatten_preds[..., 4].sigmoid() - flatten_cls_scores = flatten_preds[..., 5:].sigmoid() - flatten_anchors = torch.cat(mlvl_anchors) - flatten_strides = torch.cat(flatten_strides) - flatten_bboxes = self.bbox_coder.decode(flatten_anchors, - flatten_bbox_preds, - flatten_strides.unsqueeze(-1)) - - if with_nms and (flatten_objectness.size(0) == 0): - return torch.zeros((0, 5)), torch.zeros((0, )) - - if rescale: - flatten_bboxes /= flatten_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - padding = flatten_bboxes.new_zeros(num_imgs, flatten_bboxes.shape[1], - 1) - flatten_cls_scores = torch.cat([flatten_cls_scores, padding], dim=-1) - - det_results = [] - for (bboxes, scores, objectness) in zip(flatten_bboxes, - flatten_cls_scores, - flatten_objectness): - # Filtering out all predictions with conf < conf_thr - conf_thr = cfg.get('conf_thr', -1) - if conf_thr > 0: - conf_inds = objectness >= conf_thr - bboxes = bboxes[conf_inds, :] - scores = scores[conf_inds, :] - objectness = objectness[conf_inds] - - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=objectness) - det_results.append(tuple([det_bboxes, det_labels])) - return det_results - - @force_fp32(apply_to=('pred_maps', )) - def loss(self, - pred_maps, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - pred_maps (list[Tensor]): Prediction map for each scale level, - shape (N, num_anchors * num_attrib, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_imgs = len(img_metas) - device = pred_maps[0][0].device - - featmap_sizes = [ - pred_maps[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [mlvl_anchors for _ in range(num_imgs)] - - responsible_flag_list = [] - for img_id in range(len(img_metas)): - responsible_flag_list.append( - self.prior_generator.responsible_flags(featmap_sizes, - gt_bboxes[img_id], - device)) - - target_maps_list, neg_maps_list = self.get_targets( - anchor_list, responsible_flag_list, gt_bboxes, gt_labels) - - losses_cls, losses_conf, losses_xy, losses_wh = multi_apply( - self.loss_single, pred_maps, target_maps_list, neg_maps_list) - - return dict( - loss_cls=losses_cls, - loss_conf=losses_conf, - loss_xy=losses_xy, - loss_wh=losses_wh) - - def loss_single(self, pred_map, target_map, neg_map): - """Compute loss of a single image from a batch. - - Args: - pred_map (Tensor): Raw predictions for a single level. - target_map (Tensor): The Ground-Truth target for a single level. - neg_map (Tensor): The negative masks for a single level. - - Returns: - tuple: - loss_cls (Tensor): Classification loss. - loss_conf (Tensor): Confidence loss. - loss_xy (Tensor): Regression loss of x, y coordinate. - loss_wh (Tensor): Regression loss of w, h coordinate. - """ - - num_imgs = len(pred_map) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(num_imgs, -1, self.num_attrib) - neg_mask = neg_map.float() - pos_mask = target_map[..., 4] - pos_and_neg_mask = neg_mask + pos_mask - pos_mask = pos_mask.unsqueeze(dim=-1) - if torch.max(pos_and_neg_mask) > 1.: - warnings.warn('There is overlap between pos and neg sample.') - pos_and_neg_mask = pos_and_neg_mask.clamp(min=0., max=1.) - - pred_xy = pred_map[..., :2] - pred_wh = pred_map[..., 2:4] - pred_conf = pred_map[..., 4] - pred_label = pred_map[..., 5:] - - target_xy = target_map[..., :2] - target_wh = target_map[..., 2:4] - target_conf = target_map[..., 4] - target_label = target_map[..., 5:] - - loss_cls = self.loss_cls(pred_label, target_label, weight=pos_mask) - loss_conf = self.loss_conf( - pred_conf, target_conf, weight=pos_and_neg_mask) - loss_xy = self.loss_xy(pred_xy, target_xy, weight=pos_mask) - loss_wh = self.loss_wh(pred_wh, target_wh, weight=pos_mask) - - return loss_cls, loss_conf, loss_xy, loss_wh - - def get_targets(self, anchor_list, responsible_flag_list, gt_bboxes_list, - gt_labels_list): - """Compute target maps for anchors in multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_total_anchors, 4). - responsible_flag_list (list[list[Tensor]]): Multi level responsible - flags of each image. Each element is a tensor of shape - (num_total_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - target_map_list (list[Tensor]): Target map of each level. - - neg_map_list (list[Tensor]): Negative map of each level. 
- """ - num_imgs = len(anchor_list) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - results = multi_apply(self._get_targets_single, anchor_list, - responsible_flag_list, gt_bboxes_list, - gt_labels_list) - - all_target_maps, all_neg_maps = results - assert num_imgs == len(all_target_maps) == len(all_neg_maps) - target_maps_list = images_to_levels(all_target_maps, num_level_anchors) - neg_maps_list = images_to_levels(all_neg_maps, num_level_anchors) - - return target_maps_list, neg_maps_list - - def _get_targets_single(self, anchors, responsible_flags, gt_bboxes, - gt_labels): - """Generate matching bounding box prior and converted GT. - - Args: - anchors (list[Tensor]): Multi-level anchors of the image. - responsible_flags (list[Tensor]): Multi-level responsible flags of - anchors - gt_bboxes (Tensor): Ground truth bboxes of single image. - gt_labels (Tensor): Ground truth labels of single image. - - Returns: - tuple: - target_map (Tensor): Predication target map of each - scale level, shape (num_total_anchors, - 5+num_classes) - neg_map (Tensor): Negative map of each scale level, - shape (num_total_anchors,) - """ - - anchor_strides = [] - for i in range(len(anchors)): - anchor_strides.append( - torch.tensor(self.featmap_strides[i], - device=gt_bboxes.device).repeat(len(anchors[i]))) - concat_anchors = torch.cat(anchors) - concat_responsible_flags = torch.cat(responsible_flags) - - anchor_strides = torch.cat(anchor_strides) - assert len(anchor_strides) == len(concat_anchors) == \ - len(concat_responsible_flags) - assign_result = self.assigner.assign(concat_anchors, - concat_responsible_flags, - gt_bboxes) - sampling_result = self.sampler.sample(assign_result, concat_anchors, - gt_bboxes) - - target_map = concat_anchors.new_zeros( - concat_anchors.size(0), self.num_attrib) - - target_map[sampling_result.pos_inds, :4] = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes, - anchor_strides[sampling_result.pos_inds]) - - target_map[sampling_result.pos_inds, 4] = 1 - - gt_labels_one_hot = F.one_hot( - gt_labels, num_classes=self.num_classes).float() - if self.one_hot_smoother != 0: # label smooth - gt_labels_one_hot = gt_labels_one_hot * ( - 1 - self.one_hot_smoother - ) + self.one_hot_smoother / self.num_classes - target_map[sampling_result.pos_inds, 5:] = gt_labels_one_hot[ - sampling_result.pos_assigned_gt_inds] - - neg_map = concat_anchors.new_zeros( - concat_anchors.size(0), dtype=torch.uint8) - neg_map[sampling_result.neg_inds] = 1 - - return target_map, neg_map - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('pred_maps')) - def onnx_export(self, pred_maps, img_metas, with_nms=True): - num_levels = len(pred_maps) - pred_maps_list = [pred_maps[i].detach() for i in range(num_levels)] - - cfg = self.test_cfg - assert len(pred_maps_list) == self.num_levels - - device = pred_maps_list[0].device - batch_size = pred_maps_list[0].shape[0] - - featmap_sizes = [ - pred_maps_list[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - multi_lvl_bboxes = [] - multi_lvl_cls_scores = [] - multi_lvl_conf_scores = [] - for i in range(self.num_levels): - # get some key info for current scale - pred_map = pred_maps_list[i] - stride = self.featmap_strides[i] - # (b,h, w, num_anchors*num_attrib) -> - # (b,h*w*num_anchors, num_attrib) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.num_attrib) - # Inplace operation like - # ```pred_map[..., :2] = \torch.sigmoid(pred_map[..., :2])``` - # would create constant tensor when exporting to onnx - pred_map_conf = torch.sigmoid(pred_map[..., :2]) - pred_map_rest = pred_map[..., 2:] - pred_map = torch.cat([pred_map_conf, pred_map_rest], dim=-1) - pred_map_boxes = pred_map[..., :4] - multi_lvl_anchor = mlvl_anchors[i] - multi_lvl_anchor = multi_lvl_anchor.expand_as(pred_map_boxes) - bbox_pred = self.bbox_coder.decode(multi_lvl_anchor, - pred_map_boxes, stride) - # conf and cls - conf_pred = torch.sigmoid(pred_map[..., 4]) - cls_pred = torch.sigmoid(pred_map[..., 5:]).view( - batch_size, -1, self.num_classes) # Cls pred one-hot. 
- - # Get top-k prediction - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - _, topk_inds = conf_pred.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = ( - bbox_pred.shape[1] * batch_inds + topk_inds) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - cls_pred = cls_pred.reshape( - -1, self.num_classes)[transformed_inds, :].reshape( - batch_size, -1, self.num_classes) - conf_pred = conf_pred.reshape(-1, 1)[transformed_inds].reshape( - batch_size, -1) - - # Save the result of current scale - multi_lvl_bboxes.append(bbox_pred) - multi_lvl_cls_scores.append(cls_pred) - multi_lvl_conf_scores.append(conf_pred) - - # Merge the results of different scales together - batch_mlvl_bboxes = torch.cat(multi_lvl_bboxes, dim=1) - batch_mlvl_scores = torch.cat(multi_lvl_cls_scores, dim=1) - batch_mlvl_conf_scores = torch.cat(multi_lvl_conf_scores, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - conf_thr = cfg.get('conf_thr', -1) - score_thr = cfg.get('score_thr', -1) - # follow original pipeline of YOLOv3 - if conf_thr > 0: - mask = (batch_mlvl_conf_scores >= conf_thr).float() - batch_mlvl_conf_scores *= mask - if score_thr > 0: - mask = (batch_mlvl_scores > score_thr).float() - batch_mlvl_scores *= mask - batch_mlvl_conf_scores = batch_mlvl_conf_scores.unsqueeze(2).expand_as( - batch_mlvl_scores) - batch_mlvl_scores = batch_mlvl_scores * batch_mlvl_conf_scores - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - # keep aligned with original pipeline, improve - # mAP by 1% for YOLOv3 in ONNX - score_threshold = 0 - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx( - batch_mlvl_bboxes, - batch_mlvl_scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - nms_pre, - cfg.max_per_img, - ) - else: - return batch_mlvl_bboxes, batch_mlvl_scores diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/smooth_l1_loss.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/smooth_l1_loss.py deleted file mode 100644 index 551174672933cb0d23c93cbe22053e3910a9dcfb..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/losses/smooth_l1_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def smooth_l1_loss(pred, target, beta=1.0): - """Smooth L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. 
- - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def l1_loss(pred, target): - """L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - - Returns: - torch.Tensor: Calculated loss - """ - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - loss = torch.abs(pred - target) - return loss - - -@LOSSES.register_module() -class SmoothL1Loss(nn.Module): - """Smooth L1 loss. - - Args: - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". Defaults to "mean". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, beta=1.0, reduction='mean', loss_weight=1.0): - super(SmoothL1Loss, self).__init__() - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * smooth_l1_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class L1Loss(nn.Module): - """L1 loss. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(L1Loss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * l1_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_bbox diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index cf39ebef2fa26f69bb56e6d08384991975ad1cc2..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. - """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. 
- """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/spaces/rorallitri/biomedical-language-models/Nokia-Best-Bb5-Easy-Service-Tool-188-Crack-Extra-Quality.md b/spaces/rorallitri/biomedical-language-models/Nokia-Best-Bb5-Easy-Service-Tool-188-Crack-Extra-Quality.md deleted file mode 100644 index 22675825bad83e30602ae822853c9f7c6c03137d..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/Nokia-Best-Bb5-Easy-Service-Tool-188-Crack-Extra-Quality.md +++ /dev/null @@ -1,106 +0,0 @@ -## nokia best bb5 easy service tool 1.88 crack - - - - - - ![Nokia Best Bb5 Easy Service Tool 1.88 Crack Extra Quality](https://1.bp.blogspot.com/_DB3mghkhzX8/ShzXXCL-G7I/AAAAAAAAAFM/-y74nriGg5A/S1600-R/BB5.jpg) - - - - - -**CLICK HERE [https://denirade.blogspot.com/?download=2txosf](https://denirade.blogspot.com/?download=2txosf)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "nokia best bb5 easy service tool 1.88 crack": - -# How to Download and Use Nokia Best BB5 Easy Service Tool 1.88 Crack Without Box - - - -Nokia Best BB5 Easy Service Tool is a software that allows you to flash, unlock and repair your Nokia phones powered by BB5, MeeGo, MediaTek and NXPlatform chipsets. It also helps you to read and reset user code, life timer, product profile, self-test and many other functions. However, to use this tool, you need an Infinity BEST box or dongle, which can be expensive and hard to find. In this article, we will show you how to download and use Nokia Best BB5 Easy Service Tool 1.88 Crack without box for free. - - - -## Download Nokia Best BB5 Easy Service Tool 1.88 Crack - - - -To download Nokia Best BB5 Easy Service Tool 1.88 Crack, you can follow these steps: - - - -1. Click on this link[^1^] to go to the download page of CFirmware. - -2. Scroll down and click on the "Download" button. - -3. Wait for a few seconds until the file link is fetched from the download server. - -4. Click on the file link to start downloading the zip package. - -5. Extract the zip package using WinRAR or any other extraction tool. - -6. You will get a folder named "InfinityBox\_install\_BEST\_v2.26\_Cracked" which contains the setup file and the crack file. - - - -## Install Nokia Best BB5 Easy Service Tool 1.88 Crack - - - -To install Nokia Best BB5 Easy Service Tool 1.88 Crack, you can follow these steps: - - - -1. Run the setup file named "InfinityBox\_install\_BEST\_v2.26.exe" as administrator. - -2. Follow the installation wizard and choose the destination folder. - -3. After the installation is completed, do not run the tool yet. - -4. Copy the crack file named "BEST.exe" from the folder and paste it into the installation folder (usually C:\Program Files\InfinityBox\BEST). - -5. Replace the original file with the crack file when prompted. - -6. You have successfully installed Nokia Best BB5 Easy Service Tool 1.88 Crack on your computer. - - - -## Use Nokia Best BB5 Easy Service Tool 1.88 Crack - - - -To use Nokia Best BB5 Easy Service Tool 1.88 Crack, you can follow these steps: - - - -1. Run the tool by double-clicking on "BEST.exe" from the installation folder. - -2. Select the platform (BB5, MeeGo, MediaTek or NXPlatform) of your Nokia phone from the drop-down menu. - -3. Connect your Nokia phone to your computer via USB cable in flash mode or test mode depending on what you want to do. - -4. 
Go to the "Flashing" tab if you want to flash firmware on your phone. Choose the firmware file from your computer and click on "Flash". Wait for the process to complete. - -5. Go to the "Service" tab if you want to unlock or repair your phone. Choose the option you want (such as Reset User Code, Reset Life Timer, etc.) and click on it. Wait for the process to complete. - -6. You have successfully used Nokia Best BB5 Easy Service Tool 1.88 Crack without box for free. - - - - dfd1c89656 - - - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Bluestacks 6.1.6.5643 Mod Rooted Offline Installer.md b/spaces/rorallitri/biomedical-language-models/logs/Bluestacks 6.1.6.5643 Mod Rooted Offline Installer.md deleted file mode 100644 index cfa299736eedbd2619514b0346f0f0492adecadc..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Bluestacks 6.1.6.5643 Mod Rooted Offline Installer.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Bluestacks 6.1.6.5643 Mod Rooted {Offline Installer}


                Download Filehttps://tinurll.com/2uzo6N



                - -Bluestacks 6.1.6.5643 Mod Rooted Offline Installer · Crack Lego Piratas Del Caribe Pc 39 · Young Goodman Brown a.k.a Download songs ... 1fdad05405
                -
                -
                -

                diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md b/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md deleted file mode 100644 index 488cda8e3845c8b807ed830e66d179de4166466e..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Fsync Performance on Storage Devices Tips and Tricks for Improving Database Consistency and Durability.md +++ /dev/null @@ -1,25 +0,0 @@ - -

                Because the fsync call takes time, it greatly affects the performance of MySQL; this is why you have probably noticed that there are many status variables related to fsyncs. To overcome the inherent limitations of storage devices, group commit allows multiple simultaneous transactions to share a single fsync of the log file: there is no need for a transaction to call fsync for a write operation that another transaction has already forced to disk. A series of write transactions sent over a single database connection, however, cannot benefit from group commit.

                -

                With the above numbers, the possible transaction rates in fully ACID mode are pretty depressing. But those drives were rotating ones; what about SSDs? SSDs are memory devices and are much faster for random IO operations. They are extremely fast for reads and good for writes but, as you will see below, not that great for fsyncs.
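As a rough back-of-the-envelope illustration of where such numbers come from (the drive speed below is an assumption, not a measurement from this post), a rotating disk without a write cache can only force data onto the platter about once per revolution:

```python
# Illustrative upper bound on durable commits/s for a rotating disk (no write cache).
rpm = 7200
max_fsyncs_per_sec = rpm / 60.0        # ~120 revolutions/s -> at most ~120 fsyncs/s
fsyncs_per_commit = 1                  # best case, a single log flush per commit
print(max_fsyncs_per_sec / fsyncs_per_commit)   # ~120 fully durable transactions/s per connection
```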

                -




                -

                The fsync system is not the only system call that persists data to disk. There is also the fdatasync call. fdatasync persists the data to disk but does not update the metadata information like the file size and last update time. Said otherwise, it performs one write operation instead of two. In the Python script, if I replace os.fsync with os.fdatasync, here are the results for a subset of devices:
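The Python script referred to above is not reproduced here; a minimal sketch of that kind of benchmark loop might look like the following (the file path, write size, and iteration count are assumptions, not the author's exact script):

```python
import os
import time

PATH = "/mnt/target-device/fsync-test.dat"   # assumption: a file on the device under test
ITERATIONS = 10_000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
start = time.monotonic()
for _ in range(ITERATIONS):
    os.write(fd, b"x" * 512)    # small append, similar to a transaction log record
    os.fsync(fd)                # swap in os.fdatasync(fd) to skip the metadata flush
elapsed = time.monotonic() - start
os.close(fd)
print(f"{ITERATIONS / elapsed:.0f} fsyncs/s, {1000 * elapsed / ITERATIONS:.3f} ms per fsync")
```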

                -

                I tested your fsync.py using our SAN HPE 3PAR StoreServ 8400 storage.
                It is a relatively high-end flash-based storage device.
                10,000 iterations took 19.303 s, or about 1.93 ms per fsync (~518 fsyncs per second).

                -

                Upon checkpoint, dirty buffers in shared buffers are written to the page cache managed by the kernel. Through an fsync(), these modified blocks are applied to disk. If an fsync() call is successful, all dirty pages from the corresponding file are guaranteed to be persisted on disk. When an fsync is issued to flush the pages to disk, PostgreSQL itself cannot guarantee a copy of the modified/dirty pages, because writes from the page cache to storage are managed entirely by the kernel, not by PostgreSQL.

                -

                In a fully durable configuration, MySQL tends to be hit even harder by poor fsync() performance: it may need to perform as many as three fsync operations per transaction commit. Group commit reduces the impact on throughput, but transaction latency still suffers.
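As a hedged illustration of what this means for latency (the per-fsync time below is an assumed value, and the breakdown into three flushes follows the usual description of the InnoDB redo log plus binary log two-phase commit):

```python
# Worst-case durable commit latency when nothing is batched by group commit:
# one redo log flush at prepare, one binary log flush, one redo log flush at commit.
fsync_latency_ms = 5.0                  # assumption: a drive that completes an fsync in ~5 ms
fsyncs_per_commit = 3
commit_latency_ms = fsyncs_per_commit * fsync_latency_ms
print(f"~{commit_latency_ms:.0f} ms per commit, ~{1000 / commit_latency_ms:.0f} commits/s per connection")
```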

                -


                A few years ago, Intel introduced a new type of storage devices based on the 3D_XPoint technology and sold under the Optane brand. Those devices are outperforming regular flash devices and have higher endurance. In the context of this post, I found they are also very good at handling the fsync call, something many flash devices are not great at doing.

                -

                The above results are pretty amazing. The fsync performance is on par with a RAID controller with a write cache, for which I got a rate of about 23,000/s, and is much better than a regular NAND-based NVMe card like the Intel PC-3700, which delivered an fsync rate of about 7,300/s. Even with the full ext4 journal enabled, the rate is still excellent, although, as expected, it is cut roughly in half.

                -

                -

                If you have a large dataset, you can still use the Optane card as a read/write cache and improve fsync performance significantly. I did some tests with two easily available solutions, dm-cache and bcache. In both cases, the Optane card was put in front of an external USB Sata disk and the cache layer set to writeback.

                -

                In my previous post Testing Samsung storage in tpcc-mysql benchmark of Percona Server I compared different Samsung devices. Most solid state drives (SSDs) use 4KiB as an internal page size, and the InnoDB default page size is 16KiB. I wondered how using a different innodb_page_size might affect the overall performance.

                -

                While working on the service architecture for one of our projects, I considered several SATA SSD options as the possible main storage for the data. The system will be quite write intensive, so the main interest is the write performance on capacities close to full-size storage.

                -

                Persistent disks are networked storage and generally have higher latency compared to physical disks or local SSDs. To reach the maximum performance limits of your persistent disks, you must issue enough I/O requests in parallel. To check if you're using a high enough queue depth to reach your required performance levels, see I/O queue depth.
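A minimal sketch of what "enough I/O requests in parallel" can look like from user space, using a thread pool to keep a fixed number of reads in flight (the path, block size, and queue depth are assumptions, and a large test file is assumed to exist):

```python
import os
from concurrent.futures import ThreadPoolExecutor

PATH = "/data/large-test-file"   # assumption: a large file on the persistent disk
BLOCK = 4096
QUEUE_DEPTH = 32                 # number of requests we try to keep in flight

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

def read_block(i):
    # pseudo-random, block-aligned offset within the file
    offset = ((i * 7919) % (size // BLOCK)) * BLOCK
    return len(os.pread(fd, BLOCK, offset))

with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    total = sum(pool.map(read_block, range(100_000)))

os.close(fd)
print(f"read {total} bytes")
```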

                -

                NAND flash memory has been used widely in various mobile devices like smartphones, tablets and MP3 players. Furthermore, server systems have started using flash devices as their primary storage. Despite its broad use, flash memory has several limitations, such as the erase-before-write requirement, the need to write erased blocks sequentially, and the limited number of write cycles per erase block.

                -

                Earlier flash file systems access the NAND flash memory directly and have to address all the chip-level issues, such as wear-levelling and bad-block management, themselves. Unlike these systems, F2FS targets flash storage devices that come with a dedicated controller and firmware (FTL) to handle such low-level tasks. Such flash storage devices are far more commonplace.

                -

                F2FS was designed from scratch to optimize the performance and lifetime of flash devices with a generic block interface. It builds on the concept of the Log-Structured File System (LFS), but also introduces a number of new design considerations, including the roll-forward recovery mechanism and the adaptive logging policy discussed below.

                -

                Applications like databases (e.g., SQLite) frequently write small amounts of data to a file and issue an fsync to guarantee durability. A naive approach to supporting fsync would be to trigger checkpointing and recover data with the roll-back model. However, this approach leads to poor performance, as checkpointing involves writing all node and dentry blocks unrelated to the database file. F2FS implements an efficient roll-forward recovery mechanism to enhance fsync performance. The key idea is to write only the data blocks and their direct node blocks, excluding other node or F2FS metadata blocks. In order to find the data blocks selectively after rolling back to the stable checkpoint, F2FS retains a special flag inside direct node blocks.

                -

                Experimental results showed that adaptive logging is critical to sustain performance at high storage utilization levels. The adaptive logging policy is also shown to effectively limit the performance degradation of F2FS due to fragmentation.

                -

                Some storage optimized instance types provide the ability to control processor C-states and P-states on Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the desired performance (in CPU frequency) from a core. For more information, see Processor state control for your EC2 instance.

                -

                SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy is to reserve space in the SSD instance storage so that the controller can more efficiently manage the space available for write operations. This is called over-provisioning. The SSD-based instance store volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller can use it for over-provisioning. This decreases the storage that you can use, but increases performance even if the disk is close to full capacity.
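As a small worked example of that recommendation (the volume size is an arbitrary assumption):

```python
# Leave ~10% of an SSD instance store volume unpartitioned so the controller
# can use it for over-provisioning.
volume_gib = 900                      # assumption: raw volume size in GiB
usable_gib = volume_gib * 0.90        # size to actually partition and format
print(f"partition {usable_gib:.0f} GiB, leave {volume_gib - usable_gib:.0f} GiB unpartitioned")
```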

                aaccfb2cb3
                -
                -
                \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md b/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md deleted file mode 100644 index e3840ca48d1b69ba07dc66d38c6210aafb078595..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Garden Planner 3.5.20 Key - _BEST_ Crackingpatching Serial Key Keygen [PATCHED].md +++ /dev/null @@ -1,6 +0,0 @@ -

                Garden Planner 3.5.20 Key - Crackingpatching Serial Key Keygen [PATCHED]


                Download File ⚹⚹⚹ https://tinurll.com/2uznmH



                - - aaccfb2cb3
                -
                -
                -

                diff --git a/spaces/ruangguru/ds-chatbot-internal/README.md b/spaces/ruangguru/ds-chatbot-internal/README.md deleted file mode 100644 index 75a2e362ac07d8c3a859a9e13c295b4f0d9187d1..0000000000000000000000000000000000000000 --- a/spaces/ruangguru/ds-chatbot-internal/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ds Chatbot Internal -emoji: 🌖 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py b/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py deleted file mode 100644 index 77170318de5bd6d225329a7e0b045b47a5c328b7..0000000000000000000000000000000000000000 --- a/spaces/rushic24/Priyanka-Chopra-TTS/synthesis/synthesize.py +++ /dev/null @@ -1,233 +0,0 @@ -import argparse -import os -import matplotlib.pyplot as plt -import torch -import numpy as np -import matplotlib -from scipy.io.wavfile import write -from os.path import dirname, abspath -import sys - -import nltk - -nltk.download("punkt") - -sys.path.append(dirname(dirname(abspath(__file__)))) -matplotlib.use("Agg") - -from training.tacotron2_model import Tacotron2 -from training.clean_text import clean_text -from training import DEFAULT_ALPHABET -from synthesis.vocoders import Hifigan - - -def load_model(model_path): - """ - Loads the Tacotron2 model. - Uses GPU if available, otherwise uses CPU. - - Parameters - ---------- - model_path : str - Path to tacotron2 model - - Returns - ------- - Tacotron2 - Loaded tacotron2 model - """ - if torch.cuda.is_available(): - model = Tacotron2().cuda() - model.load_state_dict(torch.load(model_path)["state_dict"]) - _ = model.cuda().eval().half() - else: - model = Tacotron2() - model.load_state_dict(torch.load(model_path, map_location=torch.device("cpu"))["state_dict"]) - return model - - -def generate_graph(alignments, filepath, heading=""): - """ - Generates synthesis alignment graph image. - - Parameters - ---------- - alignments : list - Numpy alignment data - filepath : str - Path to save image to - heading : str (optional) - Graph heading - """ - data = alignments.float().data.cpu().numpy()[0].T - plt.imshow(data, aspect="auto", origin="lower", interpolation="none") - if heading: - plt.title(heading) - plt.savefig(filepath) - - -def text_to_sequence(text, symbols): - """ - Generates text sequence for audio file - - Parameters - ---------- - text : str - Text to synthesize - symbols : list - List of valid symbols - """ - symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = np.array([[symbol_to_id[s] for s in text if s in symbol_to_id]]) - if torch.cuda.is_available(): - return torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - else: - return torch.autograd.Variable(torch.from_numpy(sequence)).cpu().long() - - -def join_alignment_graphs(alignments): - """ - Joins multiple alignment graphs. 
- - Parameters - ---------- - alignments : list - List of alignment Tensors - - Returns - ------- - Tensor - Combined alignment tensor - """ - alignment_sizes = [a.size() for a in alignments] - joined = torch.zeros((1, sum([a[1] for a in alignment_sizes]), sum([a[2] for a in alignment_sizes]))) - current_x = 0 - current_y = 0 - for alignment in alignments: - joined[:, current_x : current_x + alignment.size()[1], current_y : current_y + alignment.size()[2]] = alignment - current_x += alignment.size()[1] - current_y += alignment.size()[2] - return joined - - -def synthesize( - model, - text, - symbols=DEFAULT_ALPHABET, - graph_path=None, - audio_path=None, - vocoder=None, - silence_padding=0.15, - sample_rate=22050, - max_decoder_steps=1000, - split_text=False, -): - """ - Synthesise text for a given model. - Produces graph and/or audio file when given. - Supports multi line synthesis (seperated by \n). - - Parameters - ---------- - model : Tacotron2 - Tacotron2 model - text : str/list - Text to synthesize (or list of lines to synthesize) - symbols : list - List of symbols (default is English) - graph_path : str (optional) - Path to save alignment graph to - audio_path : str (optional) - Path to save audio file to - vocoder : Object (optional) - Vocoder model (required if generating audio) - silence_padding : float (optional) - Seconds of silence to seperate each clip by with multi-line synthesis (default is 0.15) - sample_rate : int (optional) - Audio sample rate (default is 22050) - max_decoder_steps : int (optional) - Max decoder steps controls sequence length and memory usage during inference. - Increasing this will use more memory but may allow for longer sentences. (default is 1000) - split_text : bool (optional) - Whether to use the split text tool to convert a block of text into multiple shorter sentences - to synthesize (default is True) - - Raises - ------- - AssertionError - If audio_path is given without a vocoder - """ - if audio_path: - assert vocoder, "Missing vocoder" - - if not isinstance(text, list) and split_text: - # Split text into multiple lines - text = nltk.tokenize.sent_tokenize(text) - - if isinstance(text, list): - # Multi-lines given - text = [line.strip() for line in text if line.strip()] - mels = [] - alignments = [] - for line in text: - text = clean_text(line, symbols) - sequence = text_to_sequence(text, symbols) - _, mel_outputs_postnet, _, alignment = model.inference(sequence, max_decoder_steps) - mels.append(mel_outputs_postnet) - alignments.append(alignment) - - if graph_path: - generate_graph(join_alignment_graphs(alignments), graph_path) - - if audio_path: - silence = np.zeros(int(silence_padding * sample_rate)).astype("int16") - audio_segments = [] - for i in range(len(mels)): - audio_segments.append(vocoder.generate_audio(mels[i])) - if i != len(mels) - 1: - audio_segments.append(silence) - - audio = np.concatenate(audio_segments) - write(audio_path, sample_rate, audio) - else: - # Single sentence - text = clean_text(text.strip(), symbols) - sequence = text_to_sequence(text, symbols) - _, mel_outputs_postnet, _, alignment = model.inference(sequence, max_decoder_steps) - - if graph_path: - generate_graph(alignment, graph_path) - - if audio_path: - audio = vocoder.generate_audio(mel_outputs_postnet) - write(audio_path, sample_rate, audio) - - -if __name__ == "__main__": - """Synthesize audio using model and vocoder""" - parser = argparse.ArgumentParser(description="Synthesize audio using model and vocoder") - parser.add_argument("-m", "--model_path", 
type=str, help="tacotron2 model path", required=True) - parser.add_argument("-vm", "--vocoder_model_path", type=str, help="vocoder model path", required=True) - parser.add_argument("-hc", "--hifigan_config_path", type=str, help="hifigan_config path", required=True) - parser.add_argument("-t", "--text", type=str, help="text to synthesize", required=True) - parser.add_argument("-g", "--graph_output_path", type=str, help="path to save alignment graph to", required=False) - parser.add_argument("-a", "--audio_output_path", type=str, help="path to save output audio to", required=False) - parser.add_argument("--silence_padding", type=float, help="Padding between sentences in seconds", default=0.15) - parser.add_argument("--sample_rate", type=int, help="Audio sample rate", default=22050) - args = parser.parse_args() - - assert os.path.isfile(args.model_path), "Model not found" - assert os.path.isfile(args.vocoder_model_path), "vocoder model not found" - - model = load_model(args.model_path) - vocoder = Hifigan(args.vocoder_model_path, args.hifigan_config_path) - - synthesize( - model=model, - text=args.text, - graph_path=args.graph_output_path, - audio_path=args.audio_output_path, - vocoder=vocoder, - silence_padding=args.silence_padding, - sample_rate=args.sample_rate, - ) diff --git a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py b/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py deleted file mode 100644 index 8e961183802ae29d19b0df4da6d0da4aaba66bfb..0000000000000000000000000000000000000000 --- a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py +++ /dev/null @@ -1,610 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import numpy as np -import PIL.Image -import torch -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import * - -# https://github.com/mikonvergence/ControlNetInpaint - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> # !pip install opencv-python transformers accelerate - >>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler - >>> from diffusers.utils import load_image - >>> import numpy as np - >>> import torch - - >>> import cv2 - >>> from PIL import Image - >>> # download an image - >>> image = load_image( - ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" - ... ) - >>> image = np.array(image) - >>> mask_image = load_image( - ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" - ... 
) - >>> mask_image = np.array(mask_image) - >>> # get canny image - >>> canny_image = cv2.Canny(image, 100, 200) - >>> canny_image = canny_image[:, :, None] - >>> canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2) - >>> canny_image = Image.fromarray(canny_image) - - >>> # load control net and stable diffusion v1-5 - >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - >>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( - ... "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16 - ... ) - - >>> # speed up diffusion process with faster scheduler and memory optimization - >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - >>> # remove following line if xformers is not installed - >>> pipe.enable_xformers_memory_efficient_attention() - - >>> pipe.enable_model_cpu_offload() - - >>> # generate image - >>> generator = torch.manual_seed(0) - >>> image = pipe( - ... "futuristic-looking doggo", - ... num_inference_steps=20, - ... generator=generator, - ... image=image, - ... control_image=canny_image, - ... mask_image=mask_image - ... ).images[0] - ``` -""" - - -def prepare_mask_and_masked_image(image, mask): - """ - Prepares a pair (image, mask) to be consumed by the Stable Diffusion pipeline. This means that those inputs will be - converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the - ``image`` and ``1`` for the ``mask``. - The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be - binarized (``mask > 0.5``) and cast to ``torch.float32`` too. - Args: - image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint. - It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width`` - ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``. - mask (_type_): The mask to apply to the image, i.e. regions to inpaint. - It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width`` - ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``. - Raises: - ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask - should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions. - TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not - (ot the other way around). - Returns: - tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4 - dimensions: ``batch x channels x height x width``. 
- """ - if isinstance(image, torch.Tensor): - if not isinstance(mask, torch.Tensor): - raise TypeError( - f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not" - ) - - # Batch single image - if image.ndim == 3: - assert ( - image.shape[0] == 3 - ), "Image outside a batch should be of shape (3, H, W)" - image = image.unsqueeze(0) - - # Batch and add channel dim for single mask - if mask.ndim == 2: - mask = mask.unsqueeze(0).unsqueeze(0) - - # Batch single mask or add channel dim - if mask.ndim == 3: - # Single batched mask, no channel dim or single mask not batched but channel dim - if mask.shape[0] == 1: - mask = mask.unsqueeze(0) - - # Batched masks no channel dim - else: - mask = mask.unsqueeze(1) - - assert ( - image.ndim == 4 and mask.ndim == 4 - ), "Image and Mask must have 4 dimensions" - assert ( - image.shape[-2:] == mask.shape[-2:] - ), "Image and Mask must have the same spatial dimensions" - assert ( - image.shape[0] == mask.shape[0] - ), "Image and Mask must have the same batch size" - - # Check image is in [-1, 1] - if image.min() < -1 or image.max() > 1: - raise ValueError("Image should be in [-1, 1] range") - - # Check mask is in [0, 1] - if mask.min() < 0 or mask.max() > 1: - raise ValueError("Mask should be in [0, 1] range") - - # Binarize mask - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - - # Image as float32 - image = image.to(dtype=torch.float32) - elif isinstance(mask, torch.Tensor): - raise TypeError( - f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not" - ) - else: - # preprocess image - if isinstance(image, (PIL.Image.Image, np.ndarray)): - image = [image] - - if isinstance(image, list) and isinstance(image[0], PIL.Image.Image): - image = [np.array(i.convert("RGB"))[None, :] for i in image] - image = np.concatenate(image, axis=0) - elif isinstance(image, list) and isinstance(image[0], np.ndarray): - image = np.concatenate([i[None, :] for i in image], axis=0) - - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - # preprocess mask - if isinstance(mask, (PIL.Image.Image, np.ndarray)): - mask = [mask] - - if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image): - mask = np.concatenate( - [np.array(m.convert("L"))[None, None, :] for m in mask], axis=0 - ) - mask = mask.astype(np.float32) / 255.0 - elif isinstance(mask, list) and isinstance(mask[0], np.ndarray): - mask = np.concatenate([m[None, None, :] for m in mask], axis=0) - - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - masked_image = image * (mask < 0.5) - - return mask, masked_image - - -class StableDiffusionControlNetInpaintPipeline( - StableDiffusionControlNetPipeline -): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion with ControlNet guidance. - - This model inherits from [`StableDiffusionControlNetPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. 
- tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - controlnet ([`ControlNetModel`]): - Provides additional conditioning to the unet during the denoising process - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def prepare_mask_latents( - self, - mask, - masked_image, - batch_size, - height, - width, - dtype, - device, - generator, - do_classifier_free_guidance, - ): - # resize the mask to latents shape as we concatenate the mask to the latents - # we do that before converting to dtype to avoid breaking in case we're using cpu_offload - # and half precision - mask = torch.nn.functional.interpolate( - mask, - size=( - height // self.vae_scale_factor, - width // self.vae_scale_factor, - ), - ) - mask = mask.to(device=device, dtype=dtype) - - masked_image = masked_image.to(device=device, dtype=dtype) - - # encode the mask image into latents space so we can concatenate it to the latents - if isinstance(generator, list): - masked_image_latents = [ - self.vae.encode(masked_image[i : i + 1]).latent_dist.sample( - generator=generator[i] - ) - for i in range(batch_size) - ] - masked_image_latents = torch.cat(masked_image_latents, dim=0) - else: - masked_image_latents = self.vae.encode( - masked_image - ).latent_dist.sample(generator=generator) - masked_image_latents = ( - self.vae.config.scaling_factor * masked_image_latents - ) - - # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method - if mask.shape[0] < batch_size: - if not batch_size % mask.shape[0] == 0: - raise ValueError( - "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to" - f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number" - " of masks that you pass is divisible by the total requested batch size." - ) - mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1) - if masked_image_latents.shape[0] < batch_size: - if not batch_size % masked_image_latents.shape[0] == 0: - raise ValueError( - "The passed images and the required batch size don't match. Images are supposed to be duplicated" - f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed." - " Make sure the number of images that you pass is divisible by the total requested batch size." 
- ) - masked_image_latents = masked_image_latents.repeat( - batch_size // masked_image_latents.shape[0], 1, 1, 1 - ) - - mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask - masked_image_latents = ( - torch.cat([masked_image_latents] * 2) - if do_classifier_free_guidance - else masked_image_latents - ) - - # aligning device to prevent device errors when concating it with the latent model input - masked_image_latents = masked_image_latents.to( - device=device, dtype=dtype - ) - return mask, masked_image_latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.FloatTensor, PIL.Image.Image] = None, - control_image: Union[ - torch.FloatTensor, - PIL.Image.Image, - List[torch.FloatTensor], - List[PIL.Image.Image], - ] = None, - mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[ - Union[torch.Generator, List[torch.Generator]] - ] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[ - Callable[[int, int, torch.FloatTensor], None] - ] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - controlnet_conditioning_scale: float = 1.0, - ): - r""" - Function invoked when calling the pipeline for generation. - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - control_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`): - The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. PIL.Image.Image` can - also be accepted as an image. The control image is automatically resized to fit the output image. - mask_image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted - to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) - instead of 3, so the expected shape would be `(B, H, W, 1)`. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. 
- guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2 of the [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will be generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from the `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that, if specified, is passed along to the `AttnProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0): - The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added - to the residual in the original unet. 
- Examples: - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height, width = self._default_height_width(height, width, control_image) - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - control_image, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare image - control_image = self.prepare_image( - control_image, - width, - height, - batch_size * num_images_per_prompt, - num_images_per_prompt, - device, - self.controlnet.dtype, - ) - - if do_classifier_free_guidance: - control_image = torch.cat([control_image] * 2) - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 6. Prepare latent variables - num_channels_latents = self.controlnet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # EXTRA: prepare mask latents - mask, masked_image = prepare_mask_and_masked_image(image, mask_image) - mask, masked_image_latents = self.prepare_mask_latents( - mask, - masked_image, - batch_size * num_images_per_prompt, - height, - width, - prompt_embeds.dtype, - device, - generator, - do_classifier_free_guidance, - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. 
Denoising loop - num_warmup_steps = ( - len(timesteps) - num_inference_steps * self.scheduler.order - ) - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = ( - torch.cat([latents] * 2) - if do_classifier_free_guidance - else latents - ) - latent_model_input = self.scheduler.scale_model_input( - latent_model_input, t - ) - - down_block_res_samples, mid_block_res_sample = self.controlnet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - controlnet_cond=control_image, - return_dict=False, - ) - - down_block_res_samples = [ - down_block_res_sample * controlnet_conditioning_scale - for down_block_res_sample in down_block_res_samples - ] - mid_block_res_sample *= controlnet_conditioning_scale - - # predict the noise residual - latent_model_input = torch.cat( - [latent_model_input, mask, masked_image_latents], dim=1 - ) - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * ( - noise_pred_text - noise_pred_uncond - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, t, latents, **extra_step_kwargs - ).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ( - (i + 1) > num_warmup_steps - and (i + 1) % self.scheduler.order == 0 - ): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # If we do sequential model offloading, let's offload unet and controlnet - # manually for max memory savings - if ( - hasattr(self, "final_offload_hook") - and self.final_offload_hook is not None - ): - self.unet.to("cpu") - self.controlnet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker( - image, device, prompt_embeds.dtype - ) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Run safety checker - image, has_nsfw_concept = self.run_safety_checker( - image, device, prompt_embeds.dtype - ) - - # Offload last model to CPU - if ( - hasattr(self, "final_offload_hook") - and self.final_offload_hook is not None - ): - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput( - images=image, nsfw_content_detected=has_nsfw_concept - ) diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py deleted file mode 100644 index 8168b96ca51e6e494c7c675c2f4a610e21b095d6..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/inference.py +++ /dev/null @@ -1,98 +0,0 @@ -from typing import Tuple, List - -import cv2 -import numpy as np -import supervision as sv -import torch -from PIL import Image -from torchvision.ops import box_convert - -import groundingdino.datasets.transforms as T -from groundingdino.models import build_model -from groundingdino.util.misc import clean_state_dict -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import get_phrases_from_posmap - - -def preprocess_caption(caption: str) -> str: - result = caption.lower().strip() - if result.endswith("."): - return result - return result + "." - - -def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - model.eval() - return model - - -def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_source = Image.open(image_path).convert("RGB") - image = np.asarray(image_source) - image_transformed, _ = transform(image_source, None) - return image, image_transformed - - -def predict( - model, - image: torch.Tensor, - caption: str, - box_threshold: float, - text_threshold: float, - device: str = "cuda" -) -> Tuple[torch.Tensor, torch.Tensor, List[str]]: - caption = preprocess_caption(caption=caption) - - model = model.to(device) - image = image.to(device) - - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - - prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256) - prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4) - - mask = prediction_logits.max(dim=1)[0] > box_threshold - logits = prediction_logits[mask] # logits.shape = (n, 256) - boxes = prediction_boxes[mask] # boxes.shape = (n, 4) - - tokenizer = model.tokenizer - tokenized = tokenizer(caption) - - phrases = [ - get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '') - for logit - in logits - ] - - return boxes, logits.max(dim=1)[0], phrases - - -def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray: - h, w, _ = image_source.shape - boxes = boxes * torch.Tensor([w, h, w, h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - detections = sv.Detections(xyxy=xyxy) - - labels = [ - f"{phrase} {logit:.2f}" - for phrase, logit - in zip(phrases, logits) - 
] - - box_annotator = sv.BoxAnnotator() - annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR) - annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels) - return annotated_frame diff --git a/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py b/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py deleted file mode 100644 index 2560e2a8b891773ff4c4f7a608b89b8ac4518194..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization/IMSToucan/InferenceInterfaces/AnonFastSpeech2.py +++ /dev/null @@ -1,91 +0,0 @@ -import librosa.display as lbd -import matplotlib.pyplot as plt -import soundfile -import torch - -from .InferenceArchitectures.InferenceFastSpeech2 import FastSpeech2 -from .InferenceArchitectures.InferenceHiFiGAN import HiFiGANGenerator -from ..Preprocessing.ArticulatoryCombinedTextFrontend import ArticulatoryCombinedTextFrontend -from ..Preprocessing.ArticulatoryCombinedTextFrontend import get_language_id - - -class AnonFastSpeech2(torch.nn.Module): - - def __init__(self, device: str, path_to_hifigan_model: str, path_to_fastspeech_model: str): - """ - Args: - device: Device to run on. CPU is feasible, still faster than real-time, but a GPU is significantly faster. - path_to_hifigan_model: Path to the vocoder model, including filename and suffix. - path_to_fastspeech_model: Path to the synthesis model, including filename and suffix. - - """ - super().__init__() - language = "en" - self.device = device - self.text2phone = ArticulatoryCombinedTextFrontend(language=language, add_silence_to_end=True) - checkpoint = torch.load(path_to_fastspeech_model, map_location='cpu') - self.phone2mel = FastSpeech2(weights=checkpoint["model"], lang_embs=None).to(torch.device(device)) - self.mel2wav = HiFiGANGenerator(path_to_weights=path_to_hifigan_model).to(torch.device(device)) - self.default_utterance_embedding = checkpoint["default_emb"].to(self.device) - self.phone2mel.eval() - self.mel2wav.eval() - self.lang_id = get_language_id(language) - self.to(torch.device(device)) - - def forward(self, text, view=False, text_is_phonemes=False): - """ - Args: - text: The text that the TTS should convert to speech - view: Boolean flag whether to produce and display a graphic showing the generated audio - text_is_phonemes: Boolean flag whether the text parameter contains phonemes (True) or graphemes (False) - - Returns: - 48kHz waveform as 1d tensor - - """ - with torch.no_grad(): - phones = self.text2phone.string_to_tensor(text, input_phonemes=text_is_phonemes).to(torch.device(self.device)) - mel, durations, pitch, energy = self.phone2mel(phones, - return_duration_pitch_energy=True, - utterance_embedding=self.default_utterance_embedding) - mel = mel.transpose(0, 1) - wave = self.mel2wav(mel) - if view: - from Utility.utils import cumsum_durations - fig, ax = plt.subplots(nrows=2, ncols=1) - ax[0].plot(wave.cpu().numpy()) - lbd.specshow(mel.cpu().numpy(), - ax=ax[1], - sr=16000, - cmap='GnBu', - y_axis='mel', - x_axis=None, - hop_length=256) - ax[0].yaxis.set_visible(False) - ax[1].yaxis.set_visible(False) - duration_splits, label_positions = cumsum_durations(durations.cpu().numpy()) - ax[1].set_xticks(duration_splits, minor=True) - ax[1].xaxis.grid(True, which='minor') - ax[1].set_xticks(label_positions, minor=False) - ax[1].set_xticklabels(self.text2phone.get_phone_string(text)) - ax[0].set_title(text) - plt.subplots_adjust(left=0.05, bottom=0.1, right=0.95, 
top=.9, wspace=0.0, hspace=0.0) - plt.show() - return wave - - def anonymize_to_file(self, text: str, text_is_phonemes: bool, target_speaker_embedding: torch.tensor, path_to_result_file: str): - """ - Args: - text: The text that the TTS should convert to speech - text_is_phonemes: Boolean flag whether the text parameter contains phonemes (True) or graphemes (False) - target_speaker_embedding: The speaker embedding that should be used for the produced speech - path_to_result_file: The path to the location where the resulting speech should be saved (including the filename and .wav suffix) - - """ - - assert text.strip() != "" - assert path_to_result_file.endswith(".wav") - - self.default_utterance_embedding = target_speaker_embedding.to(self.device) - wav = self(text=text, text_is_phonemes=text_is_phonemes) - soundfile.write(file=path_to_result_file, data=wav.cpu().numpy(), samplerate=48000) diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md b/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md deleted file mode 100644 index 803be3e61b848138c70524357a970683c60d8ad0..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HD Online Player (Ek Villain 1080p Bluray Movie Downlo).md +++ /dev/null @@ -1,16 +0,0 @@ -

                HD Online Player (Ek Villain 1080p Bluray Movie Downlo)


                Download https://gohhs.com/2uEzmi



                - -The two have a daughter, Ranveer's first child with Aisha, born when his wife was only twenty one years old. She was not happy with her marriage, and was not prepared to move in with Guru. Aisha Singh "Aisha" is an Indian film and television actress who is a famous actor and model. However, they soon get married. Aisha Singh was born on 7 September, in New Delhi, India. - -Watch Ek Villain - Hindi Thriller full movie on Disney+ Hotstar now. Aisha's father is a police officer. Guru gets injured in an encounter in which an armed squad consisting of both Rajasthan and Delhi police surrounded Guru's home in the early hours of the day.. An Indian television actress and producer, who is also a trained classical dancer, Miss Aisha Singh is the first female Indian artiste to enter and win the Eurovision Song Contest as a music director of the song Gagan mein and win the title of Miss Universe. - -Top Trending News - -Guru, a goon, marries Aisha and decides to make a fresh start.. Watch Ek Villain - Hindi Thriller full movie on Disney+ Hotstar now. 4fefd39f24
                -
                -
                -

                diff --git a/spaces/seanshahkarami/clip-explorer/README.md b/spaces/seanshahkarami/clip-explorer/README.md deleted file mode 100644 index df64a9d6f528002d23c2b195d9701b4b47ca1741..0000000000000000000000000000000000000000 --- a/spaces/seanshahkarami/clip-explorer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Clip Explorer -emoji: 🏁 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md b/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md deleted file mode 100644 index c5e908d5dc5aff663d90d6669e78c0ffec765ce4..0000000000000000000000000000000000000000 --- a/spaces/shahukareem/Wav2Vec2-Large-XLSR-53-Dhivehi/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Wav2Vec2 Large XLSR 53 Dhivehi -emoji: ⚡ -colorFrom: gray -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h deleted file mode 100644 index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000 --- a/spaces/shencc/gpt/crazy_functions/test_project/cpp/cppipc/policy.h +++ /dev/null @@ -1,25 +0,0 @@ -#pragma once - -#include - -#include "libipc/def.h" -#include "libipc/prod_cons.h" - -#include "libipc/circ/elem_array.h" - -namespace ipc { -namespace policy { - -template